cooling door

2008-03-29 Thread Paul Vixie

page 10 and 11 of http://www.panduit.com/products/brochures/105309.pdf says
there's a way to move 20kW of heat away from a rack if your normal CRAC is
moving 10kW (it depends on that basic air flow), permitting six blade servers
in a rack.  panduit licensed this tech from IBM a couple of years ago.  i am
intrigued by the possible drop in total energy cost per delivered kW, though
in practice most datacenters can't get enough utility and backup power to run
at this density.  if cooling doors were to take off, we'd see data centers
partitioned off and converted to cubicles.
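
as a rough back-of-the-envelope (the 6C coil delta-T below is an assumption i
picked for illustration, not a number from the brochure), the water flow needed
to carry 20kW away is surprisingly small:

    # sketch: chilled water flow needed to carry 20 kW at an assumed 6 C rise
    heat_w    = 20_000              # watts of heat to remove from the rack
    cp_water  = 4186                # J/(kg*K), specific heat of water
    delta_t   = 6.0                 # K, assumed coil temperature rise
    flow_kg_s = heat_w / (cp_water * delta_t)    # ~0.8 kg/s, i.e. ~0.8 L/s
    flow_gpm  = flow_kg_s * 15.85                # ~12.6 US gpm
    print(round(flow_kg_s, 2), "kg/s ~", round(flow_gpm, 1), "gpm")

versus the roughly 3000 CFM of air it would take to move the same 20kW at a
20F rise.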
-- 
Paul Vixie


AT&T lack of customer service

2008-03-29 Thread Patrick Torney

Hi folks:

This is my first posting to this email group. If I am at the wrong place for 
what I'm asking, I humbly ask for your collective indulgence.

I am having a problem with a service order with AT&T and I am getting nowhere 
with their sales organization. My repeated inquiries go unheeded for days, even 
weeks on end. And it's been going on for over 5 months.

Can any of our distinguished AT&T colleagues (or anyone else within the NANOG 
community for that matter) contact me off-line? I am hoping someone can provide 
me with contact info for someone higher up inside AT&T, either within their 
sales organization or at an executive level, to whom I may address my 
grievances. I would be most grateful.

Thank you in advance.

Kind Regards,
Patrick Torney
Trion World Network


Re: cooling door

2008-03-29 Thread John Curran

At 3:17 PM + 3/29/08, Paul Vixie wrote:
page 10 and 11 of http://www.panduit.com/products/brochures/105309.pdf says
there's a way to move 20kW of heat away from a rack if your normal CRAC is
moving 10kW (it depends on that basic air flow), permitting six blade servers
in a rack.  panduit licensed this tech from IBM a couple of years ago.  i am
intrigued by the possible drop in total energy cost per delivered kW, though
in practice most datacenters can't get enough utility and backup power to run
at this density.

While the chilled water door will provide higher equipment
density per rack, it relies on water piping back to a Cooling
Distribution Unit (CDU) which is in the corner sitting by your
CRAC/CRAH units.  Whether this is actually more efficient
depends quite a bit on the (omitted) specifications for that
unit...I know that it would have to be quite a bit before
many folks would: 1) introduce another cooling system
(with all the necessary redundancy), and 2) put pressurized
water in the immediate vicinity of any computer equipment.

/John



Re: cooling door

2008-03-29 Thread Jon Lewis


On Sat, 29 Mar 2008, John Curran wrote:


unit...I know that it would have to be quite a bit before
many folks would: 1) introduce another cooling system
(with all the necessary redundancy), and 2) put pressurized
water in the immediate vicinity of any computer equipment.


What could possibly go wrong?  :)
If it leaks, you get the added benefits of conductive and evaporative 
cooling.


--
 Jon Lewis   |  I route
 Senior Network Engineer |  therefore you are
 Atlantic Net|
_ http://www.lewis.org/~jlewis/pgp for PGP public key_


Re: cooling door

2008-03-29 Thread Alex Pilosov

On 29 Mar 2008, Paul Vixie wrote:

 
 page 10 and 11 of http://www.panduit.com/products/brochures/105309.pdf
 says there's a way to move 20kW of heat away from a rack if your normal
 CRAC is moving 10kW (it depends on that basic air flow), permitting six
 blade servers in a rack.  panduit licensed this tech from IBM a couple
 of years ago.  i am intrigued by the possible drop in total energy cost
 per delivered kW, though in practice most datacenters can't get enough
 utility and backup power to run at this density.  if cooling doors
 were to take off, we'd see data centers partitioned off and converted to
 cubicles.
Can someone please, pretty please with sugar on top, explain the point
behind high power density? 


Raw real estate is cheap (basically, nearly free). Increasing power
density per sqft will *not* decrease cost: beyond 100W/sqft, the real
estate costs are a tiny portion of total cost. Moving enough air to cool
400 (or, in your case, 2000) watts per square foot is *hard*.
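
A quick sanity check on why moving that much air is hard, using the common
sensible-heat rule of thumb (CFM = 3.412*W / (1.08*dT_F)) and an assumed 20F
supply-to-return rise; these are illustrative numbers, not measurements:

    # airflow needed per square foot of floor at an assumed 20 F air rise
    def cfm_per_sqft(watts_per_sqft, delta_t_f=20.0):
        btu_per_hr = watts_per_sqft * 3.412          # convert watts to BTU/hr
        return btu_per_hr / (1.08 * delta_t_f)       # sensible-heat airflow rule

    for w in (100, 400, 2000):
        print(w, "W/sqft ->", round(cfm_per_sqft(w)), "CFM/sqft")
    # 100 W/sqft -> ~16 CFM, 400 -> ~63 CFM, 2000 -> ~316 CFM per square foot

At 2000 W/sqft you are pushing hundreds of CFM through every square foot of
floor, which is why air alone stops scaling.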

I've recently started to price things as cost per square amp. (That is,
1A of power, conditioned, delivered to the customer rack, and cooled.) Space
is really irrelevant: to me, as a colo provider, whether I have 100A going
into a single rack or into 5 racks is irrelevant. In fact, my *costs*
(including real estate) are likely to be lower when the load is spread
over 5 racks. Similarly, to a customer, all they care about is getting
their gear online, and they couldn't care less whether it needs to be in 1 rack
or in 5 racks.

To rephrase vijay, what is the problem being solved?

[not speaking as mlc anything]



Re: cooling door

2008-03-29 Thread Paul Vixie

 Can someone please, pretty please with sugar on top, explain the point
 behind high power density? 

maybe.

 Raw real estate is cheap (basically, nearly free).

not in downtown palo alto.  now, you could argue that downtown palo alto
is a silly place for an internet exchange.  or you could note that conditions
giving rise to high and diverse longhaul and metro fiber density, also give
rise to high real estate costs.

 Increasing power density per sqft will *not* decrease cost, beyond
 100W/sqft, the real estate costs are a tiny portion of total cost. Moving
 enough air to cool 400 (or, in your case, 2000) watts per square foot is
 *hard*.

if you do it the old way, which is like you said, moving air, that's always
true.  but, i'm not convinced that we're going to keep doing it the old way.

 I've started to recently price things as cost per square amp. (That is,
 1A power, conditioned, delivered to the customer rack and cooled). Space
 is really irrelevant - to me, as colo provider, whether I have 100A going
 into a single rack or 5 racks, is irrelevant. In fact, my *costs*
 (including real estate) are likely to be lower when the load is spread
 over 5 racks. Similarly, to a customer, all they care about is getting
 their gear online, and can care less whether it needs to be in 1 rack or
 in 5 racks.
 
 To rephrase vijay, what is the problem being solved?

if you find me 300Ksqft along the caltrain fiber corridor in the peninsula
where i can get 10MW of power and have enough land around it for 10MW worth
of genset, and the price per sqft is low enough that i can charge by the
watt and floor space be damned and still come out even or ahead, then please
do send me the address.
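
for scale, the arithmetic behind those numbers (nothing below comes from a real
site survey, it's just division):

    # average density if 10 MW were spread evenly over 300k sqft
    total_w = 10_000_000
    sqft    = 300_000
    print(round(total_w / sqft, 1), "W/sqft")   # ~33 W/sqft over the whole building
    # the load isn't even, though: a room of 20kW racks concentrates that power
    # into a few hundred square feet, which is where the density problem lives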


RE: cooling door

2008-03-29 Thread michael.dillon

 Can someone please, pretty please with sugar on top, explain 
 the point behind high power density? 

It allows you to market your operation as a data center. If
you spread it out to reduce power density, then the logical 
conclusion is to use multiple physical locations. At that point
you are no longer centralized.

In any case, a lot of people are now questioning the traditional
data center model from various angles. The time is ripe for a 
paradigm change. My theory is that the new paradigm will be centrally
managed, because there is only so much expertise to go around. But
the racks will be physically distributed, in virtually every office 
building, because some things need to be close to local users. The
high speed fibre in Metro Area Networks will tie it all together
with the result that for many applications, it won't matter where
the servers are. Note that the Google MapReduce, Amazon EC2, Hadoop
trend will make it much easier to place an application without
worrying about the exact locations of the physical servers.

Back in the old days, small ISPs set up PoPs by finding a closet 
in the back room of a local store to set up modem banks. In the 21st
century folks will be looking for corporate data centers with room
for a rack or two of multicore CPUs running Xen, and OpenSolaris
SANs running ZFS/raidz providing iSCSI targets to the Xen VMs.

--Michael Dillon


Re: cooling door

2008-03-29 Thread Paul Vixie

[EMAIL PROTECTED] (John Curran) writes:

 While the chilled water door will provide higher equipment
 density per rack, it relies on water piping back to a Cooling
 Distribution Unit (CDU) which is in the corner sitting by your
 CRAC/CRAH units.

it just has to sit near the chilled water that moves the heat to
the roof.  that usually means CRAC-adjacency but other arrangements
are possible.

 I know that it would have to be quite a bit before
 many folks would: 1) introduce another cooling system
 (with all the necessary redundancy), and 2) put pressurized
 water in the immediate vicinity of any computer equipment.

the pressure differential between the pipe and atmospheric isn't
that much.  nowhere near steam or hydraulic pressures.  if it gave
me ~1500w/SF in a dense urban neighborhood i'd want to learn more.
-- 
Paul Vixie


Re: cooling door

2008-03-29 Thread John Curran

At 7:06 PM + 3/29/08, Paul Vixie wrote:
  While the chilled water door will provide higher equipment
 density per rack, it relies on water piping back to a Cooling
 Distribution Unit (CDU) which is in the corner sitting by your
 CRAC/CRAH units.

it just has to sit near the chilled water that moves the heat to
the roof.  that usually means CRAC-adjacency but other arrangements
are possible.

When one of the many CRAC units decides to fail in an air-cooled
environment, another one starts up and everything is fine.   The
nominal worst case leaves the failed CRAC unit as a potential air
pressure leakage source for the raised-floor and/or ductwork, but
that's about it.

Chilled water to the rack implies multiple CDU's with a colorful
hose and valve system within the computer room (effectively a
miniature version of the facility chilled water loop).  Trying to
eliminate potential failure modes in that setup will be quite the
adventure, which depending on your availability target may be
a non-issue or a great reason to consider moving to new space.

/John


Re: cooling door

2008-03-29 Thread Patrick Giagnocavo


John Curran wrote:


Chilled water to the rack implies multiple CDU's with a colorful
hose and valve system within the computer room (effectively a
miniature version of the facility chilled water loop).  Trying to
eliminate potential failure modes in that setup will be quite the
adventure, which depending on your availability target may be
a non-issue or a great reason to consider moving to new space.


Actually it wouldn't have to be pressurized at all if you located a 
large tank containing chilled water above and to the side, with a 
no-kink, straight line to the tank.  N+1 chiller units could feed the tank.


Thermo-siphoning would occur (though this is usually done with a cold line at 
the bottom and a warmed return line at the top of the cooling device) 
as the warm water rises to the chilled tank and more chilled water flows 
down to the intake.


You would of course have to figure out how to monitor, cut off, and contain 
any leaks. The advantage is that cooling would continue up to the limit of 
the BTUs stored in the chilled water tank, even in the absence of power.
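
As a sketch of how much ride-through such a tank buys (tank size, usable
delta-T, and heat load below are assumptions picked for illustration, not a
design):

    # ride-through time for a chilled-water tank; all figures are assumed
    tank_gal  = 10_000
    tank_kg   = tank_gal * 3.785        # ~1 kg per liter of water
    cp_water  = 4186                    # J/(kg*K)
    usable_dt = 8.0                     # K the water may warm before it stops cooling
    load_w    = 500_000                 # 500 kW of heat to absorb
    stored_j  = tank_kg * cp_water * usable_dt
    print(round(stored_j / load_w / 60), "minutes of cooling")   # ~42 minutes

Sizing the tank against however long you expect to run without chillers is the
interesting design exercise.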


Cordially

Patrick Giagnocavo
[EMAIL PROTECTED]


RE: cooling door

2008-03-29 Thread Frank Coluccio

Michael Dillon is spot on when he states the following (quotation below),
although he could have gone another step in suggesting how the distance
insensitivity of fiber could be further leveraged:

The high speed fibre in Metro Area Networks will tie it all together
with the result that for many applications, it won't matter where
the servers are. 

In fact, those same servers, and a host of other storage and network elements,
can be returned to the LAN rooms and closets of most commercial buildings from
whence they originally came prior to the large-scale data center consolidations
of the current millennium, once organizations decide to free themselves of the
100-meter constraint imposed by UTP-based LAN hardware and replace those LANs
with collapsed fiber backbone designs that attach to remote switches (which
could be either elsewhere in the building or off-site entirely), instead of the
minimum of two switches on every floor that has become customary today.

We often discuss the empowerment afforded by optical technology, but we've 
barely
scratched the surface of its ability to effect meaningful architectural changes.
The earlier prospects of creating consolidated data centers were once
near-universally considered timely and efficient, and they still are in many
respects. However, now that the problems associated with a/c and power have
entered into the calculus, some data center design strategies are beginning to
look more like anachronisms caught in a whiplash of rapidly
shifting conditions, and in a league with the constraints imposed by 
the
now-seemingly-obligatory 100-meter UTP design. 

Frank A. Coluccio
DTI Consulting Inc.
212-587-8150 Office
347-526-6788 Mobile

On Sat Mar 29 13:57 ,  sent:


 Can someone please, pretty please with sugar on top, explain 
 the point behind high power density? 

It allows you to market your operation as a data center. If
you spread it out to reduce power density, then the logical 
conclusion is to use multiple physical locations. At that point
you are no longer centralized.

In any case, a lot of people are now questioning the traditional
data center model from various angles. The time is ripe for a 
paradigm change. My theory is that the new paradigm will be centrally
managed, because there is only so much expertise to go around. But
the racks will be physically distributed, in virtually every office 
building, because some things need to be close to local users. The
high speed fibre in Metro Area Networks will tie it all together
with the result that for many applications, it won't matter where
the servers are. Note that the Google MapReduce, Amazon EC2, Hadoop
trend will make it much easier to place an application without
worrying about the exact locations of the physical servers.

Back in the old days, small ISPs set up PoPs by finding a closet 
in the back room of a local store to set up modem banks. In the 21st
century folks will be looking for corporate data centers with room
for a rack or two of multicore CPUs running XEN, and Opensolaris
SANs running ZFS/raidz providing iSCSI targets to the XEN VMs.

--Michael Dillon






RE: cooling door

2008-03-29 Thread david raistrick


On Sat, 29 Mar 2008, Frank Coluccio wrote:

In fact, those same servers, and a host of other storage and network 
elements, can be returned to the LAN rooms and closets of most 
commercial buildings from whence they originally came prior to the


How does that work?  So now we buy a whole bunch of tiny gensets, and a 
whole bunch of baby UPSen and smaller cooling units to support little 
datacenters?  Not to mention diverse paths to each point..


Didn't we (the customers) try that already and realize that it's rather 
unmanageable?



I suppose the maintenance industry would love the surge in extra 
contracts to keep all the gear running




..david


---
david raistrick    http://www.netmeister.org/news/learn2quote.html
[EMAIL PROTECTED] http://www.expita.com/nomime.html



RE: cooling door

2008-03-29 Thread Frank Coluccio

I referenced LAN rooms as an expedient and to highlight an irony. The point is,
smaller, less-concentrated, distributed enclosures suffice nicely for many
purposes, much as Google's distributed containers and Sun's data centers in a
box do. And while LAN rooms vacated as a result of using collapsed fiber might
fit these needs, since in many cases they would already be powered and
conditioned, they could also be reclaimed by tenants and landlords as usable
floor space.

I suppose the maintenance industry would love the surge in extra 
contracts to keep all the gear running

Your supposition is open to wide interpretation. I'll take it to mean that you
think more gear, not less, will require maintenance. Maybe in some cases, but
in the vast majority not.

Consider a multi-story commercial building that is entirely devoid of UTP-based
switches, but instead is supported over fiber to a colo or managed service
provider location. Why would this building require L2/3 aggregation switches and
routers, simply to get in and out, if it has no access switches inside? It
wouldn't require any routers, is my point. This reduces the number of boxes
required by a factor of two or more, since I no longer require routers onsite,
and I no longer require their mates in the upstream or colos. I wouldn't require
a classical in-building L3 hierarchy employing high-end routers at the
distribution and core levels at all, or I'd require a lot fewer of them.
Extending this rationale further, the level of logistics and LAN element
administration required to keep on-prem applications humming is also reduced,
if not eliminated, and/or
could easily be sourced more efficiently elsewhere. I.e., from a CLI or Web
browser the LAN admin could be doing her thing from Mumbai (and in some cases
this is already being done) or from home. So there's actually less gear to
manage, not more.

I realize this isn't a one-size-fits-all model, and I didn't intend to make it
appear that it was. But for the vast majority of enterprise buildings with
tenants occupying large contiguous areas, I think it makes a great deal of 
sense,
or at least would be worth evaluating to determine if it does. 

Frank A. Coluccio
DTI Consulting Inc.
212-587-8150 Office
347-526-6788 Mobile

On Sat Mar 29 18:30 , david raistrick  sent:

On Sat, 29 Mar 2008, Frank Coluccio wrote:

 In fact, those same servers, and a host of other storage and network 
 elements, can be returned to the LAN rooms and closets of most 
 commercial buildings from whence they originally came prior to the

How does that work?  So now we buy a whole bunch of tiny gensets, and a 
whole bunch of baby UPSen and smaller cooling units to support little 
datacenters?  Not to mention diverse paths to each point..

Didn't we (the customers) try that already and realize that it's rather 
unmanageable?


I suppose the maintenance industry would love the surge in extra 
contracts to keep all the gear running


..david


---
david raistrick    http://www.netmeister.org/news/learn2quote.html
[EMAIL PROTECTED] http://www.expita.com/nomime.html





latency (was: RE: cooling door)

2008-03-29 Thread Mikael Abrahamsson


On Sat, 29 Mar 2008, Frank Coluccio wrote:


We often discuss the empowerment afforded by optical technology, but we've 
barely
scratched the surface of its ability to effect meaningful architectural changes.


If you talk to the server people, they have an issue with this:

Latency.

I've talked to people who have collapsed layers in their LAN because they 
can see performance degradation for each additional switch packets have to 
pass in their NFS-mount. Yes, higher speeds mean lower serialisation 
delay, but there is still a lookup time involved and 10GE is 
substantially more expensive than GE.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: latency (was: RE: cooling door)

2008-03-29 Thread Frank Coluccio

Please clarify. To which network element are you referring in connection with
extended lookup times? Is it the collapsed optical backbone switch, or the
upstream L3 element, or perhaps both?

Certainly, some applications will demand far less latency than others. Gamers 
and
some financial (program) traders, for instance, will not tolerate delays caused
by access provisions that are extended over vast WAN, or even large Metro,
distances. But in a local/intramural setting, where optical paths amount to no
more than a klick or so, the impact is almost negligible, even to the class of
users mentioned above. Worst case, run the enterprise over the optical model and
treat those latency-sensitive users as the one-offs that they actually are by
tying them into colos that are closer to their targets. That's what a growing
number of financial firms from around the country have done in NY and CHI colos,
in any case.

As for cost, while individual ports may be significantly more expensive in one
scenario than another, the architectural decision is seldom based on a single
element cost. It's the TCO of all architectural considerations that must be 
taken
into account. Going back to my original multi-story building example-- better
yet, let's use one of the forty-story structures now being erected at Ground 
Zero
as a case in point: 

When all is said and done it will have created a minimum of two internal data
centers (main/backup/load-sharing) and a minimum of eighty (80) LAN enclosures,
with each room consisting of two L2 access switches (where each of the latter
possesses multiple 10Gbps uplinks, anyway), UPS/HVAC/Raised flooring,
firestopping, sprinklers, and a commitment to consume power for twenty years in
order to keep all this junk purring. I think you see my point. 

So even where cost may appear to be the issue when viewing cost comparisons of
discrete elements, in most cases that qualify for this type of design, i.e. 
where
an organization reaches critical mass beyond so many users, I submit that it
really is not an issue. In fact, a pervasively-lighted environment may actually
cost far less.

Frank A. Coluccio
DTI Consulting Inc.
212-587-8150 Office
347-526-6788 Mobile

On Sat Mar 29 19:20 , Mikael Abrahamsson  sent:


On Sat, 29 Mar 2008, Frank Coluccio wrote:

 We often discuss the empowerment afforded by optical technology, but we've 
 barely
 scratched the surface of its ability to effect meaningful architectural 
 changes.

If you talk to the server people, they have an issue with this:

Latency.

I've talked to people who have collapsed layers in their LAN because they 
can see performance degradation for each additional switch packets have to 
pass in their NFS-mount. Yes, higher speeds mean lower serialisation 
delay, but there is still a lookup time involved and 10GE is 
substantially more expensive than GE.

-- 
Mikael Abrahamsson    email: [EMAIL PROTECTED]




Re: latency (was: RE: cooling door)

2008-03-29 Thread Mikael Abrahamsson


On Sat, 29 Mar 2008, Frank Coluccio wrote:


Please clarify. To which network element are you referring in connection with
extended lookup times? Is it the collapsed optical backbone switch, or the
upstream L3 element, or perhaps both?


I am talking about the fact that the following topology:

server - 5 meter UTP - switch - 20 meter fiber - switch - 20 meter 
fiber - switch - 5 meter UTP - server


has worse NFS performance than:

server - 25 meter UTP - switch - 25 meter UTP - server

Imagine bringing this into metro with 1-2ms delay instead of 0.1-0.5ms.

This is one of the issues that the server/storage people have to deal 
with.
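
As a rough illustration of why tenths of a millisecond matter here (this
assumes a strictly serial workload of 32 KB reads, one outstanding at a time;
real NFS clients pipeline, so treat it as a worst case):

    # throughput of strictly serial 32 KB reads at various round-trip times
    block_bytes = 32 * 1024
    for rtt_ms in (0.1, 0.5, 1.0, 2.0):
        ops_per_sec = 1000.0 / rtt_ms
        mb_per_sec  = ops_per_sec * block_bytes / 1e6
        print(rtt_ms, "ms RTT ->", round(mb_per_sec, 1), "MB/s")
    # 0.1 ms -> ~328 MB/s, 2.0 ms -> ~16 MB/s: each extra switch's latency and
    # queueing comes straight off the top when the client waits for every reply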


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: cooling door

2008-03-29 Thread Robert Boyle


At 02:11 PM 3/29/2008, Alex Pilosov wrote:

Can someone please, pretty please with sugar on top, explain the point
behind high power density?


More equipment in your existing space means more revenue and more profit.


Raw real estate is cheap (basically, nearly free). Increasing power
density per sqft will *not* decrease cost, beyond 100W/sqft, the real
estate costs are a tiny portion of total cost. Moving enough air to cool
400 (or, in your case, 2000) watts per square foot is *hard*.


It depends on where you are located, but I understand what you are 
saying. However, the space is the cheap part. The electrical power, 
switchgear, ATS gear, gensets, UPS units, power distribution, cable/fiber 
distribution, connectivity to the datacenter, and core and distribution 
routers/switches are all basically stepped incremental costs. If you can 
leverage the existing floor infrastructure, then you maximize the return 
on your investment.



I've started to recently price things as cost per square amp. (That is,
1A power, conditioned, delivered to the customer rack and cooled). Space
is really irrelevant - to me, as colo provider, whether I have 100A going
into a single rack or 5 racks, is irrelevant. In fact, my *costs*
(including real estate) are likely to be lower when the load is spread
over 5 racks. Similarly, to a customer, all they care about is getting
their gear online, and can care less whether it needs to be in 1 rack or
in 5 racks.


I don't disagree with what you have written above, but if you can get 
100A into all 5 racks (and cool it!), then you have five times the 
revenue with the same fixed infrastructure costs (with the exception 
of a bit more power, genset, UPS, and cooling; the rest of my 
costs stay the same).



To rephrase vijay, what is the problem being solved?


For us in our datacenters, the problem being solved is getting as 
much return out of our investment as possible.


-Robert



Tellurian Networks - Global Hosting Solutions Since 1995
http://www.tellurian.com | 888-TELLURIAN | 973-300-9211
Well done is better than well said. - Benjamin Franklin



Re: latency (was: RE: cooling door)

2008-03-29 Thread Adrian Chadd

On Sun, Mar 30, 2008, Mikael Abrahamsson wrote:
 
 On Sat, 29 Mar 2008, Frank Coluccio wrote:
 
 Please clarify. To which network element are you referring in connection 
 with
 extended lookup times? Is it the collapsed optical backbone switch, or the
 upstream L3 element, or perhaps both?
 
 I am talking about the matter that the following topology:
 
 server - 5 meter UTP - switch - 20 meter fiber - switch - 20 meter 
 fiber - switch - 5 meter UTP - server
 
 has worse NFS performance than:
 
 server - 25 meter UTP - switch - 25 meter UTP - server
 
 Imagine bringing this into metro with 1-2ms delay instead of 0.1-0.5ms.
 
 This is one of the issues that the server/storage people have to deal 
 with.

That's because the LAN protocols need to be re-jiggled a little to start
looking less like LAN protocols and more like WAN protocols. Similar
things need to happen for applications.

I helped a friend debug an NFS throughput issue between some Linux servers
running Fortran-77 based numerical analysis code and a 10GE storage backend.
The storage backend can push 10GE without too much trouble but the application
wasn't poking the kernel in the right way (large fetches and prefetching, 
basically)
to fully utilise the infrastructure.
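
For a sense of scale, here is the bandwidth-delay arithmetic behind the large
fetches and prefetching point (link speed and RTTs below are picked for
illustration):

    # bytes that must be kept in flight to fill a 10 Gbit/s path at various RTTs
    link_bps = 10e9
    for rtt_ms in (0.1, 0.5, 1.0):
        in_flight_kb = link_bps * (rtt_ms / 1000.0) / 8 / 1024
        print(rtt_ms, "ms RTT ->", round(in_flight_kb), "KB outstanding")
    # 0.1 ms -> ~122 KB, 1.0 ms -> ~1221 KB: an application issuing one small
    # synchronous read at a time never gets close, whatever the link speed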

Oh, and kernel hz tickers can have similar effects on network traffic, if the
application does dumb stuff. If you're (un)lucky then you may see 1 or 2ms
of delay between packet input and scheduling processing. This doesn't matter
so much over 250ms+ latency links but matters on 0.1ms - 1ms latency links.

(Can someone please apply some science to this and publish best practices 
please?)



adrian



Re: latency (was: RE: cooling door)

2008-03-29 Thread Frank Coluccio

Understandably, some applications fall into a class that requires very short
distances for the reasons you cite, although I'm still not comfortable with the
setup you've outlined. Why, for example, are you showing two additional Ethernet
switches for the fiber option (which would naturally multiply the switch-induced
latency), but only a single switch for the UTP option?

Now, I'm comfortable in ceding this point. I should have made allowances for 
this
type of exception in my introductory post, but didn't, as I also omitted mention
of other considerations for the sake of brevity. For what it's worth, 
propagation
over copper is faster than propagation over fiber, since copper has a higher
nominal velocity of propagation (NVP) rating than fiber does, but not by enough
to cause the difference you've cited. 
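
Here is a quick sketch of the propagation arithmetic behind that point (the NVP
figures, roughly 0.70c for Cat5e and 0.67c for glass, are typical values rather
than vendor specs):

    # one-way propagation delay for the two topologies in the earlier example
    C = 3.0e8                                   # m/s, speed of light in vacuum
    def prop_ns(meters, nvp):
        return meters / (C * nvp) * 1e9
    copper_path = prop_ns(25 + 25, 0.70)                         # 50 m of UTP
    mixed_path  = prop_ns(5 + 5, 0.70) + prop_ns(20 + 20, 0.67)  # UTP + fiber
    print(round(copper_path), "ns vs", round(mixed_path), "ns")  # ~238 vs ~247 ns

The media account for tens of nanoseconds of difference; the extra switch
lookups, which are measured in microseconds, plus any queueing, are what
actually show up in the NFS numbers.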

As an aside, the manner in which o-e-o and e-o-e conversions take place when
transitioning from electronic to optical states, and back, affects latency
differently across differing link assembly approaches used. In cases where 
10Gbps
or greater is being sent across a multi-mode fiber link in a data center or
other in-building venue, for instance, parallel optics are most often used,
i.e., multiple optical channels (either fibers or wavelengths) that undergo
multiplexing and de-multiplexing (collectively: inverse multiplexing or channel
bonding) -- as opposed to a single fiber (or a single wavelength) operating at
the link's rated wire speed.

By chance, is the deserialization you cited earlier perhaps related to this
inverse muxing process? If so, that would explain the disconnect, and one
shouldn't despair, because there is a direct path to avoiding it.

In parallel optics, e-o and o-e processing is intensive at both ends
of the 10G link, and this has the effect of adding more latency than
a single-channel approach would. Yet, most of the TIA activity taking place 
today
that is geared to increasing data rates over in-building fiber links continues 
to
favor multi-mode and the use of parallel optics, as opposed to specifying
single-mode supporting a single channel. But singlemode solutions are also
available to those who dare to be different.

I'll look more closely at these issues and your original exception during the
coming week, since they represent an important aspect in assessing the overall
model. Thanks.

Frank A. Coluccio
DTI Consulting Inc.
212-587-8150 Office
347-526-6788 Mobile

On Sat Mar 29 20:30 , Mikael Abrahamsson  sent:


On Sat, 29 Mar 2008, Frank Coluccio wrote:

 Please clarify. To which network element are you referring in connection with
 extended lookup times? Is it the collapsed optical backbone switch, or the
 upstream L3 element, or perhaps both?

I am talking about the matter that the following topology:

server - 5 meter UTP - switch - 20 meter fiber - switch - 20 meter 
fiber - switch - 5 meter UTP - server

has worse NFS performance than:

server - 25 meter UTP - switch - 25 meter UTP - server

Imagine bringing this into metro with 1-2ms delay instead of 0.1-0.5ms.

This is one of the issues that the server/storage people have to deal 
with.

-- 
Mikael Abrahamsson    email: [EMAIL PROTECTED]




Re: cooling door

2008-03-29 Thread Wayne E. Bouchard

On Sat, Mar 29, 2008 at 06:54:02PM +, Paul Vixie wrote:
 
  Can someone please, pretty please with sugar on top, explain the point
  behind high power density? 

Customers are being sold blade servers on the basis that it's much
more efficient to put all your eggs in one basket, without being told
about the power and cooling requirements, or about how few
datacenters really want to (or are able to) support customers installing 15
racks of blade servers in one spot with 4x 230V/30A circuits
each. (Yes, I had that request.)
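
To put that request in numbers (using the common 80% continuous-load derating;
everything else is straight multiplication):

    # load implied by 15 racks, each fed by 4 x 230 V / 30 A circuits
    circuits, volts, amps = 4, 230, 30
    derate     = 0.8                        # typical 80% continuous-load rule
    per_rack_w = circuits * volts * amps * derate     # ~22 kW per rack
    total_w    = per_rack_w * 15                      # ~331 kW for the cluster
    tons       = total_w * 3.412 / 12_000   # 12,000 BTU/hr per ton of cooling
    print(round(per_rack_w / 1000, 1), "kW/rack,", round(total_w / 1000),
          "kW total,", round(tons), "tons of cooling")

That's on the order of a hundred tons of cooling for one customer's row.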

Customers don't want to pay for the space. They forget that they still
have to pay for the power and that that charge also includes a fee for
the added load on the UPS as well as the AC to get rid of the heat.

While there are advantages to blade servers, a fair number of sales
are to gullible users who don't know what they're getting into, not
those who really know how to get the most out of them. They get sold
on the idea of using blade servers, stick them into SD, Equinix, and
others and suddenly find out that they can only fit 2 in a rack
because of the per-rack wattage limit and end up having to buy the
space anyway. (Whether it's extra racks or extra sq ft or meters, it's
the same problem.)

Under current rules for most 3rd party datacenters, one of the
principal stated advantages, that of much greater density, is
effectively canceled out.

  Increasing power density per sqft will *not* decrease cost, beyond
  100W/sqft, the real estate costs are a tiny portion of total cost. Moving
  enough air to cool 400 (or, in your case, 2000) watts per square foot is
  *hard*.

(Remind me to strap myself to the floor to keep from becoming airborne
by the hurricane force winds while I'm working in your datacenter.)

Not convinced of the first point, but my experience is limited there. For
the second, I think the practical upper bound for my purposes is
probably between 150 and 200 watts per sq foot. (Getting much harder
once you cross the 150 watt mark.) Beyond that, it gets quite
difficult to supply enough cool air to the cabinet to keep the
equipment happy unless you can guarantee a static load and custom
design for that specific load. (And we all know that will never
happen.) And don't even talk to me about enclosed cabinets at that
point.

 if you do it the old way, which is like you said, moving air, that's always
 true.  but, i'm not convinced that we're going to keep doing it the old way.

One thing I've learned over the succession of datacenter /
computer room builds and expansions that I've been involved in is that
if you ask the same engineer about the right way to do cooling in
medium and large scale datacenters (15k sq ft and up), you'll probably
get a different opinion every time you ask the question. There are
several theories of how best to handle this and *none* of them are
right. No one has figured out an ideal solution and I'm not convinced
an ideal solution exists. So we go with what we know works. As people
experiment, what works changes. The problem is that retrofitting is a
bear. (When's the last time you were able to get a $350k PO approved
to update cooling to the datacenter? If you can't show a direct ROI,
the money people don't like you. And on a more practical line, how
many datacenters have you seen where it is physically impossible to
remove the CRAC equipment for replacement without first tearing out
entire rows of racks or even building walls?)

Anyway, my thoughts on the matter.

-Wayne

---
Wayne Bouchard
[EMAIL PROTECTED]
Network Dude
http://www.typo.org/~web/