Re: rack power question

2008-03-31 Thread MARLON BORBA

Do not forget physical security (including, but not limited to, access control
and surveillance -- different logs, videos, and people to control),
local/municipal/state laws and regulations (e.g. fire-control standards), and
personnel to manage all those sites (even if third-party)... IMHO too much
administrative burden. :-(



Regards,

Marlon Borba, CISSP, APC DataCenter Associate
Judiciary Technician - Information Security
TRF 3 Região
(11) 3012-1683
--
Practically no IT system is risk free.
(NIST Special Publication 800-30)
--
 vijay gill [EMAIL PROTECTED] 31/03/08 2:27 
[...]
On Sun, Mar 23, 2008 at 2:15 PM, [EMAIL PROTECTED] wrote:


 Given that power and HVAC are such key issues in building
 big datacenters, and that fiber to the office is now a reality
 virtually everywhere, one wonders why someone doesn't start
 building out distributed data centers. Essentially, you put
 mini data centers in every office building, possibly by
 outsourcing the enterprise data centers. Then, you have a
 more tractable power and HVAC problem. You still need to
 scale things, but since each data center is roughly comparable
 in size, it is a lot easier than trying to build out one
 big data center.


Latency matters. Also, multiple small data centers will be more expensive
than a few big ones, especially if you are sizing your heat rejection for
average load versus peak load.
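
To see why peak-versus-average sizing matters, here is a rough Python sketch;
the site count and kW figures are invented for illustration only, not numbers
from this thread:

    import random

    random.seed(1)
    SITES = 20                    # hypothetical number of mini data centers
    AVG_KW, PEAK_KW = 60.0, 100.0 # assumed per-site IT load, average and peak

    # Instantaneous load per site: each site peaks at a different time,
    # so model "right now" as a draw somewhere between average and peak.
    loads = [random.uniform(AVG_KW, PEAK_KW) for _ in range(SITES)]

    # Distributed: every small site must install heat rejection for its own peak.
    distributed_kw = SITES * PEAK_KW

    # Centralized: one plant sized for the aggregate load, which is smoothed
    # by diversity across sites, plus some engineering margin.
    centralized_kw = 1.15 * sum(loads)

    print("installed cooling, many small sites: %.0f kW" % distributed_kw)
    print("installed cooling, one big site:     %.0f kW" % centralized_kw)

The gap between the two totals is the diversity you give up when every closet
has to be engineered for its own worst case.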

[...]


Re: cooling door

2008-03-31 Thread vijay gill
On Sat, Mar 29, 2008 at 3:04 PM, Frank Coluccio [EMAIL PROTECTED]
wrote:


 Michael Dillon is spot on when he states the following (quotation below),
 although he could have gone another step in suggesting how the distance
 insensitivity of fiber could be further leveraged:


Dillon is not only not spot on, Dillon is quite a bit away from being spot
on. Read on.



 The high speed fibre in Metro Area Networks will tie it all together
 with the result that for many applications, it won't matter where
 the servers are.

 In fact, those same servers, and a host of other storage and network
 elements, can be returned to the LAN rooms and closets of most commercial
 buildings from whence they originally came prior to the large-scale data
 center consolidations of the current millennium, once organizations decide
 to free themselves of the 100-meter constraint imposed by UTP-based LAN
 hardware and replace those LANs with collapsed fiber backbone designs that
 attach to remote switches (which could be either in-building or remote),
 instead of the minimum two switches on every floor that has become
 customary today.


Here is a little hint - most distributed applications in traditional
jobsets tend to work best when they are close together. Unless you can map
those jobsets onto truly partitioned algorithms that work on local copies,
this is a _non starter_.



 We often discuss the empowerment afforded by optical technology, but
 we've barely scratched the surface of its ability to effect meaningful
 architectural changes.


No matter how much optical technology you have, it will tend to be more
expensive to run, have higher failure rates, and use more power than simply
running fiber or copper inside your datacenter. There is a reason most
people, who are backed up by sober accountants, tend to cluster stuff under
one roof.


 The earlier prospects of creating consolidated data centers were once
 near-universally considered timely and efficient, and they still are in
 many respects. However, now that the problems associated with a/c and
 power have entered into the calculus, some data center design strategies
 are beginning to look more like anachronisms that have been caught in a
 whip-lash of rapidly shifting conditions, and in a league with the
 constraints that are imposed by the now-seemingly-obligatory 100-meter
 UTP design.



Frank, let's assume we have abundant dark fiber, and an 800-strand ribbon
fiber cable costs the same as a UTP run. Can you get me some quotes from a
few folks about terminating and patching 800 strands x2?
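
Just to put a rough, entirely hypothetical number on the scale of that
question, here is a Python back-of-the-envelope, reading "x2" as both ends of
the run and plugging in a placeholder per-connector price rather than a real
quote:

    STRANDS = 800
    ENDS_PER_RUN = 2              # "x2": both ends of the run get terminated
    COST_PER_TERMINATION = 25.00  # placeholder USD per connector, labor included

    terminations = STRANDS * ENDS_PER_RUN
    print("terminations per run: %d" % terminations)
    print("termination cost per run: $%.2f" % (terminations * COST_PER_TERMINATION))

Whatever the real per-connector price turns out to be, it multiplies by 1,600
before the first patch cord is pulled, which is the point of the question.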

/vijay




 Frank A. Coluccio
 DTI Consulting Inc.
 212-587-8150 Office
 347-526-6788 Mobile

 On Sat Mar 29 13:57 ,  sent:

 
  Can someone please, pretty please with sugar on top, explain
  the point behind high power density?
 
 It allows you to market your operation as a data center. If
 you spread it out to reduce power density, then the logical
 conclusion is to use multiple physical locations. At that point
 you are no longer centralized.
 
 In any case, a lot of people are now questioning the traditional
 data center model from various angles. The time is ripe for a
 paradigm change. My theory is that the new paradigm will be centrally
 managed, because there is only so much expertise to go around. But
 the racks will be physically distributed, in virtually every office
 building, because some things need to be close to local users. The
 high speed fibre in Metro Area Networks will tie it all together
 with the result that for many applications, it won't matter where
 the servers are. Note that the Google MapReduce, Amazon EC2, and Hadoop
 trend will make it much easier to place an application without
 worrying about the exact locations of the physical servers.

 Back in the old days, small ISPs set up PoPs by finding a closet
 in the back room of a local store to set up modem banks. In the 21st
 century folks will be looking for corporate data centers with room
 for a rack or two of multicore CPUs running Xen, and OpenSolaris
 SANs running ZFS/raidz providing iSCSI targets to the Xen VMs.
 
 --Michael Dillon







RE: cooling door

2008-03-31 Thread michael.dillon

 Here is a little hint - most distributed applications in
 traditional jobsets tend to work best when they are close
 together. Unless you can map those jobsets onto truly
 partitioned algorithms that work on local copies, this is a
 _non starter_.

Let's make it simple and say it in plain English. The users
of services have made the decision that it is good enough
to be a user of a service hosted in a data center that is
remote from the client. Remote means in another building in
the same city, or in another city.

Now, given that context, many of these "good enough" applications
will run just fine if the data center is no longer in one
physical location, but distributed across many. Of course,
as you point out, one should not be stupid when designing such
distributed data centers or when setting up the applications
in them.

I would assume that every data center has local storage available
using some protocol like iSCSI, probably over a separate network
from the external client access. That right there solves most of
the problems of traditional jobsets. Secondly, I am not suggesting
that everybody should shut down big data centers, or that every
application should be hosted across several of these distributed
data centers. There will always be some apps that need centralised
scaling. But there are many others that can scale in a distributed
manner, or at least use distributed mirrors in a failover scenario.
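
As a sketch of that failover case, here is minimal client-side logic in
Python; the endpoint names are hypothetical, and a real deployment would more
likely sit behind DNS, load balancers, or anycast than a hard-coded list:

    import socket

    # Hypothetical mirrors of the same service, one per distributed data center.
    MIRRORS = [("app.dc1.example.net", 443),
               ("app.dc2.example.net", 443),
               ("app.dc3.example.net", 443)]

    def connect_to_any(mirrors, timeout=2.0):
        """Return a socket to the first mirror that answers, failing over in order."""
        for host, port in mirrors:
            try:
                return socket.create_connection((host, port), timeout=timeout)
            except OSError:
                continue  # this site is down or unreachable; try the next mirror
        raise RuntimeError("no mirror reachable")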

 No matter how much optical technology you have, it will tend
 to be more expensive to run, have higher failure rates, and
 use more power than simply running fiber or copper inside
 your datacenter. There is a reason most people, who are
 backed up by sober accountants, tend to cluster stuff under one roof.

Frankly I don't understand this kind of statement. It seems
obvious to me that high-speed metro fibre exists and corporate
IT people already have routers and switches and servers in the
building, connected to the metro fibre. Also, the sober accountants
do tend to agree with spending money on backup facilities to
avoid the risk of single points of failure. Why should company A
operate two data centers, and company B operate two data centers,
when they could outsource it all to ISP X running one data center
in each of the two locations (company A's site and company B's site)?

In addition, there is a trend to commoditize the whole data center.
Amazon, with EC2 and S3, is not the only example of a company that does
not offer any kind of colocation yet still lets you host your apps out
of its data centers. I believe that this trend will pick up
steam, and that as the corporate market begins to accept running
virtual servers on top of a commodity infrastructure, there is
an opportunity for network providers to branch out and be specialists
not only in the big consolidated data centers, but also
in running many smaller data centers that are linked by fast metro
fibre.

--Michael Dillon


Re: cooling door

2008-03-31 Thread Deepak Jain




Matthew Petach wrote:

On 3/29/08, Alex Pilosov [EMAIL PROTECTED] wrote:

Can someone please, pretty please with sugar on top, explain the point
 behind high power density?

 Raw real estate is cheap (basically, nearly free). Increasing power
 density per sqft will *not* decrease cost; beyond 100W/sqft, the real
 estate costs are a tiny portion of total cost. Moving enough air to cool
 400 (or, in your case, 2000) watts per square foot is *hard*.

 I've started to recently price things as cost per square amp. (That is,
 1A of power, conditioned, delivered to the customer rack, and cooled.) Space
 is really irrelevant - to me, as a colo provider, whether I have 100A going
 into a single rack or 5 racks is irrelevant. In fact, my *costs*
 (including real estate) are likely to be lower when the load is spread
 over 5 racks. Similarly, to a customer, all they care about is getting
 their gear online, and they couldn't care less whether it needs to be in
 1 rack or in 5 racks.

 To rephrase vijay, what is the problem being solved?


I have not yet found a way to split the ~10 kW power/cooling
demand of a T1600 across 5 racks.  Yes, when I want to put
a pair of them into an exchange point, I can lease 10 racks,
put T1600s in two of them, and leave the other 8 empty; but
that hasn't helped either me the customer or the exchange
point provider: they've had to burn more real estate for empty
racks that can never be filled; I'm paying for floor space in my
cage that I'm probably going to end up using for storage rather
than just have it go to waste; and we still have the problem of
two very hot spots that need relatively 'point' cooling solutions.

There are very specific cases where high density power and
cooling cannot simply be spread out over more space; thus,
research into areas like this is still very valuable.




The problem with point heating is often that the hot point is then the
*intake* for other equipment. If you spread your two T1600s across 10
racks (i.e. skip 2, drop one, skip 4, drop one, leaving two at the end),
your hot-point problem is much less of a concern.


If you bought 10 racks... not in a row, but SURROUNDING (in each of the
rows opposite the cabinets)... say 12 (a = vacant, b, c = T1600):

aaaa
abca
aaaa

You would be doing everyone in your datacenter a service by a) not
thinking linearly and b) providing adequate square footage to dissipate
your heat.
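
For a sense of why a ~10 kW box is a 'point' cooling problem, here is a quick
Python sketch using the standard sensible-heat rule of thumb for air,
CFM ~= BTU/hr / (1.08 x delta-T in deg F); the 20 degree F temperature rise
across the equipment is an assumed figure:

    # Airflow needed to carry away a given heat load, at an assumed
    # air temperature rise across the equipment (delta_t_f, in deg F).
    def cfm_needed(watts, delta_t_f=20.0):
        btu_per_hr = watts * 3.412
        return btu_per_hr / (1.08 * delta_t_f)

    for watts in (2000, 10000):   # a modest rack vs. roughly one T1600
        print("%6d W -> ~%4.0f CFM at a 20F rise" % (watts, cfm_needed(watts)))

Spreading the same watts over more floor area doesn't change the total CFM,
but it does keep any one aisle from having to deliver all of it through a
single tile.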


Deepak


OT: vendor spam

2008-03-31 Thread Bill Nash



Anyone seen spam from Uplogix.com? As it came to this email address, which
I don't give to vendors for exactly this reason, I'm suspecting it's been
harvested from this list, or maybe c-nsp, the only other list I'm active
on, if only for sporadic amounts of activity.


Has anyone else seen this? I'd like to verify the source before I add 
nails to the clue-by-four.


- billn


Re: cooling door

2008-03-31 Thread Frank Coluccio

Here is a little hint - most distributed applications in traditional jobsets
tend to work best when they are close together. Unless you can map those
jobsets onto truly partitioned algorithms that work on local copies, this is
a _non starter_.

I thought that I had made my view clear in this respect in my earlier postings.
When moving to the kinds of optical extension techniques I outlined earlier in
this thread, I can't be too emphatic in noting that one size does not fit all.

And while I appreciate the hint, you probably don't realize it yet, but in some
ways you are helping to make my argument.

Consider for a moment my initial point (my main point, in fact) concerning
moving all of the LAN gear in an enterprise building out to the cloud
(customer-owned data center or colo, it makes no difference). My contention
here is that this not only eliminates LAN rooms and all the switches and
environmentals that fill them, but it also radically reduces the bulk of the
gear normally found in the building's telecom center, since hierarchical
routing infrastructure, and servers of most types that are used for
networking purposes (along with their associated power and air provisions to
keep all of them going), would no longer have a place onsite either.

Rather, these elements, too, could be centrally located in an offsite data
center (hence reducing their overall number for multi-site networks), _or in
a colo_, OR in one of the enterprise's other sites where it makes sense.

OR IN A COLO is especially relevant here, and in some ways related to the
point you are arguing. There are many enterprises, in fact, who have already,
over their own dark fibernets (now lit, of course) and leased optical
facilities, long since taken the steps to move their server farms and major
network node positions to 111-8th, 611 Wilshire, Exodus, and scores of other
exchange locations around the world, although most of them, up until now,
have not yet taken the next precipitous step of moving their LAN gear out to
the cloud as well.

Of course, when they do decide to free the premises of LAN gear, they will
also obviate the requirement for many of the routers and associated
networking elements in those buildings, thus streamlining L3 route
administration within the intranet.

I should emphasize here that, when played properly, this is NOT a zero-sum
game. By the same token, however, what we're discussing here is really a
situational call. I grant you, for instance, that some, perhaps many, jobs
are best suited to having their elements sited close to one another. Many,
as I've outlined above, do not fit this constraint. This is a scalar type of
decision process, where the smallest instance of the fractal doesn't require
the same absolute level of provisioning as the largest, and where each is a
candidate that must meet a minimum set of criteria before making full sense.

let's assume we have abundant dark fiber, and an 800-strand ribbon fiber
cable costs the same as a UTP run. Can you get me some quotes from a few
folks about terminating and patching 800 strands x2?

This is likely a rhetorical question, although it needn't be. Yes, I can get
those quotes, and quotes that are many times greater in scope, and have for a
number of financial trading floors and outside-plant dark nets. It wasn't
very long ago, however, that the same question could still be asked about
UTP, since in those earlier times every wire of every pair required the use
of a soldering iron. My point being, the state of the art of fiber handling,
connectorization, and splicing continues to improve all the time, as do the
quality and cost of pre-connectorized jumpers (in the event your question had
to do with jumpers and long cross-connects).

There is a reason most people, who are backed up by sober accountants, tend to
cluster stuff under one roof.

Agreed. Sometimes, however, perhaps quite often in fact, one can attribute this
behavior to a quality known more commonly as bunker mentality.
--
When I closed my preceding message in this subthread I stated I would welcome a
continuation of this discussion offlist with anyone who was interested. Since
then I've received onlist responses, so I responded here in kind, but my earlier
offer still holds.

Frank A. Coluccio
DTI Consulting Inc.
212-587-8150 Office
347-526-6788 Mobile
--


Re: cooling door

2008-03-31 Thread Christopher Morrow

On Mon, Mar 31, 2008 at 11:24 AM,  [EMAIL PROTECTED] wrote:

  Let's make it simple and say it in plain English. The users
  of services have made the decision that it is good enough
  to be a user of a service hosted in a data center that is
  remote from the client. Remote means in another building in
  the same city, or in another city.

  Now, given that context, many of these good enough applications
  will run just fine if the data center is no longer in one
  physical location, but distributed across many. Of course,
  as you point out, one should not be stupid when designing such
  distributed data centers or when setting up the applications
  in them.

I think many folks have gotten used to 'my application runs in a
datacenter' (perhaps even a remote datacenter), but they still want
performance from their application. If the DC is 20ms away from their
desktop (say they are in NYC and the DC is in IAD, which is about 20ms
distant on a good run), things are still 'snappy'. If the application
uses bits and pieces from a farm of remote datacenters (also 20ms or so
away, so anywhere from NYC to ATL to CHI away from IAD), latency inside
that application is now important... something like a DB-heavy app
will really suffer under this scenario if locality of the database's
data isn't kept in mind as well. Making multiple +20ms hops around for
information is really going to impact user experience of the
application, I think.
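
A rough Python sketch of that effect; the RTT and per-lookup service time are
illustrative numbers only:

    RTT_MS = 20.0   # e.g. NYC <-> IAD on a good run, per the example above

    def response_ms(sequential_lookups, rtt_ms=RTT_MS, server_ms=5.0):
        """Client round trip plus each dependent back-end lookup done in series."""
        return rtt_ms + sequential_lookups * (rtt_ms + server_ms)

    for n in (1, 5, 20):
        print("%2d dependent lookups -> ~%3.0f ms before the page renders"
              % (n, response_ms(n)))

Twenty chained lookups that would be negligible on a LAN turn into roughly
half a second of wait once every hop crosses a metro or long-haul link.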

The security model as well would be highly interesting in this sort of
world... both physical security (line/machine/cage) and information
security (data over the links). This seems to require fairly quick
encryption in a very distributed environment where physical security
isn't very highly assured.


  I would assume that every data center has local storage available
  using some protocol like iSCSI and probably over a separate network
  from the external client access. That right there solves most of
  your problems of traditional jobsets. And secondly, I am not suggesting
  that everybody should shut down big data centers or that every
  application
  should be hosted across several of these distributed data centers.
  There will always be some apps that need centralised scaling. But
  there are many others that can scale in a distributed manner, or
  at least use distributed mirrors in a failover scenario.


Ah, like the distributed DR sites financials use? (I've heard of
designs, perhaps from this list even, of distributed DCs 60+ miles
apart with iSCSI on fiber between the sites... pushing backup copies
of transaction data to the DR facility.) That doesn't help in scenarios
with highly interactive data sets, or lower latency requirements for
applications... I also remember an SAP (I think) installation that got
horribly unhappy with the database/front-end parts a few cities apart
from each other over an 'internal' network...
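
The distinction matters because of raw propagation delay. A quick Python
estimate, treating light in fiber as roughly 200 km per millisecond and
ignoring routing detours and equipment delay:

    KM_PER_MILE = 1.609
    FIBER_KM_PER_MS = 200.0   # light in fiber is roughly two-thirds of c

    def sync_write_penalty_ms(miles):
        """Round-trip propagation added to every synchronously acknowledged write."""
        km = miles * KM_PER_MILE
        return 2.0 * km / FIBER_KM_PER_MS

    for miles in (60, 500):
        print("%4d miles -> ~%.1f ms added per synchronous write"
              % (miles, sync_write_penalty_ms(miles)))

Pushing asynchronous backup copies tolerates that easily; a chatty database
doing thousands of synchronous writes per transaction does not.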

  In addition, there is a trend to commoditize the whole data center.
  Amazon EC2 and S3 is not the only example of a company who does
  not offer any kind of colocation, but you can host your apps out
  of their data centers. I believe that this trend will pick up

ASPs were a trend in the late '90s; for some reason things didn't
work out then (the reason isn't really important now). Today and going
forward, some things make sense to outsource in this manner; I'm not sure
that customer-critical data or data with high change rates are it though,
certainly nothing that's critical to your business from an IP
perspective, at least not without lots of security thought/controls.

When working at a large networking company we found it really hard to
get people to move their applications out from under their desk (yes,
literally) and into a production datacenter... even with offers of
mostly free hardware and management of systems (so less internal
budget used). Some of that was changing when I left, but certainly not
quickly.

It's an interesting proposition, and the DCs in question were owned
by the company in question. I'm not sure about moving off to another
company's facilities though... scary security problems result.

-Chris