On 04.03.2013 11:46, Matt Lawrence wrote:
This is a bit of a rant, so adjust your filters accordingly.

I'm currently doing some work in a not-really-production datacenter
(unless you ask the developers) that has a variety of systems.  Some
of the systems I'm dealing with are the 4-servers-in-2U variety.  It's
a neat idea, but great care needs to be taken to avoid problems.  One
of the big issues is cable density: the systems I'm managing have a
BMC connection, 2x 1Gb connections, 2x 10Gb connections and a KVM
dongle.  That's 7 things plugged into the back of each server (the
KVM is VGA + USB).  Multiply by 4 and that's 28 cables per 2U, plus
2x power cables.  That's a lot of cables, and they aren't running in
neat rows like a 48-port switch.
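
If you want to see how fast that adds up, here's a quick
back-of-the-envelope script (the 20-chassis-per-rack figure is my
assumption, not anything off a spec sheet; the cable breakdown is the
one above):

# Rough cable-count tally for a 4-node-in-2U chassis.
CABLES_PER_NODE = {
    "BMC": 1,
    "1Gb Ethernet": 2,
    "10Gb Ethernet": 2,
    "KVM dongle (VGA + USB)": 2,
}
NODES_PER_CHASSIS = 4
POWER_CORDS_PER_CHASSIS = 2    # shared PSUs feeding all 4 nodes
CHASSIS_PER_RACK = 20          # assumption: 40U of a 42U rack

per_node = sum(CABLES_PER_NODE.values())                              # 7
per_chassis = per_node * NODES_PER_CHASSIS + POWER_CORDS_PER_CHASSIS  # 30
per_rack = per_chassis * CHASSIS_PER_RACK                             # 600

print(f"{per_node} cables per node, {per_chassis} per 2U chassis, "
      f"{per_rack} in a full rack")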

Adding to the problem is the fact that the disks plug into the front
of the system and the server electronics plug in from the back of the
server, right through that rat's nest of cabling.  It's a challenge.
If you consider yourself OK at cabling, you don't have anywhere near
the skills to do cabling at this density.  Typical cabling standards
are not adequate for these kinds of setups.  Mediocre cabling also
really blocks the airflow; the systems I'm dealing with are nice and
toasty at the back of the cabinet.

Another option I'm dealing with is blade enclosures.  They manage to
get 16 servers into 10U of rack space and (at least here) they have
switches built in.  This means I only have 7 network cables running to
a top-of-rack switch/patch panel.  So much easier to deal with.  The
blades are accessible via the front of the rack, which is also much
easier.  The enclosures have built-in management, which again makes
things easier.  A downside is that certain failures require taking
down the entire enclosure to fix, so you lose 16 servers instead of
the 4 in the other kind of high-density server.  I have never been a
big fan of blade enclosures, but I'm starting to come around.

Of course, one issue that too few people think about until it is too
late is power density and cooling capacity.  Being able to put 4
servers in 2U sounds really nifty until you discover you can only
power and cool half a rack of them.
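
A rough sketch of that math, with made-up but plausible numbers (the
20 kW rack budget and 500 W per node are my assumptions, not measured
figures):

# Back-of-the-envelope power/cooling check.
RACK_POWER_BUDGET_W = 20000   # what the facility can actually power/cool per rack
WATTS_PER_NODE = 500          # assumed loaded draw of one server module
NODES_PER_CHASSIS = 4
CHASSIS_HEIGHT_U = 2
RACK_USABLE_U = 40

nodes_by_power = RACK_POWER_BUDGET_W // WATTS_PER_NODE                    # 40
nodes_by_space = (RACK_USABLE_U // CHASSIS_HEIGHT_U) * NODES_PER_CHASSIS  # 80

print(f"Space allows {nodes_by_space} nodes, power allows {nodes_by_power}: "
      f"the rack fills up on power at half its height")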

This concludes my rant for today.  Maybe.

Last time I evaluated going with the high-density chassis (both 4x1 and
blades), I found that unless I went into a brand new data center that
had enough power to light a city, I was limited to about 16 actual
servers per rack. ("actual server" = 1 blade or 1 of 4 modules, but not
7 of 9...)
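
Roughly what that ceiling implies, with an assumed per-server draw
(not the actual figures from that evaluation):

# Implied per-rack feed for 16 "actual servers" (wattage is an assumption).
WATTS_PER_SERVER = 400   # assumed average draw per blade/module
SERVERS_PER_RACK = 16

required_rack_feed_kw = WATTS_PER_SERVER * SERVERS_PER_RACK / 1000
print(f"{SERVERS_PER_RACK} servers at {WATTS_PER_SERVER} W each needs about "
      f"{required_rack_feed_kw:.1f} kW per rack, before the chassis is even full")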

The overall ROI (with vendor discounts), along with ease of maintenance
in a lights-out data center (and no geo-local labor), pointed
overwhelmingly at 1U single servers.

This evaluation was done about 1.5 years ago.

--
Mr. Flibble
King of the Potato People
