What I found more interesting were the origins of this... a software service 
vendor (Facebook) sponsoring the development of a hardware platform for their 
own use.  Interesting.

I don't know how useful the "open" aspect is.  The startup costs to make a 
batch of mobos are fairly high, so it's not a "small volume" kind of 
proposition, and I'm sure that for any volume at all, you could get the 
reference design from the processor mfr and start there.  For all I know, the 
reference design isn't "closed source".

I thoroughly agree with Bill on the fans and such.  Maybe in their target 
application, they've got external fans or ducting.  They've already moved away 
from the 19" RETMA rack.. Maybe they're going to the Lux Baking Sheet packaging 
model.  

A 16.5 x 16" form factor would fit nicely on the end of a 26x18" sheet pan.  The 
inside dimensions of a standard sheet pan are about 16.5" minimum (depending on 
how much the sides slope in and how big a bead there is).  That would leave you 
about 8" for your disk drives.

The Facebook Open Rack V1 is 23.6" wide and 42" deep.  A big sheet pan rack is 
23-26"W and 30-39" deep.  The Facebook Open Rack has adjustable ledges upon 
which the computers sit.  Some bakery racks have adjustable ledges upon which 
the sheet pans sit.

  Coincidence?  I think not.<grin>

If we find out that they are using double stick foam tape to hold the 
motherboard down instead of fasteners, then we'll know.. 
you read it here first on the Beowulf list.. 
(actually, it says "It is completely screw-less," and the pdf of the chassis 
says it is 18.9x23.38 inches.. a bit different from a standard sheet pan)

The airflow spec says 12-106 CFM at the server level, and 1116-9324 CFM for a 
whole rack with 90 servers, stacked 30 high and 3 wide.
http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Server_Chassis_and_Triplet_v1.0.pdf
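As a quick sanity check on those figures (this is just arithmetic on the numbers quoted above, not anything extra from the spec), the rack totals work out to a per-server range that sits comfortably inside the quoted 12-106 CFM:

```python
# Sanity check on the OCP airflow figures quoted above:
# 1116-9324 CFM total for a rack of 90 servers (30 high x 3 wide).
servers = 30 * 3  # 90 servers per rack

rack_min_cfm, rack_max_cfm = 1116, 9324

# Per-server airflow implied by the rack totals
per_server_min = rack_min_cfm / servers
per_server_max = rack_max_cfm / servers

print(per_server_min, per_server_max)  # 12.4 103.6 -- inside 12-106 CFM
```

So the rack totals are essentially the per-server range (slightly tightened at both ends) times 90.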


But really, for a high density system, separating the fans from the 
electronics, when you're already going to have 20 or so shelves in the rack, 
just makes sense.  You're never going to try to pull out one system while 
leaving the rest running.  In these kinds of massive shared-load datacenter 
apps, why not just let the systems die, and accept the slight degradation as 
your shipping container with 2000 processors slowly turns into a 1999-, then a 
1998-, then a 1997-processor machine.  Maybe once every couple of months, you 
send someone through to shut down a whole rack, replace the failed systems 
within it, and bring it up again.
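For a toy picture of that attrition (the failure rate here is purely my assumption for illustration; it's not from any spec), a roughly one-processor-per-month loss looks like:

```python
# Illustrative attrition model -- the monthly failure rate is an
# assumption picked for illustration, not a measured or spec'd number.
initial = 2000
monthly_failure_rate = 0.0005  # assumed: 0.05% of nodes fail per month

def expected_survivors(months):
    """Expected number of still-running processors after `months`,
    assuming independent failures and no replacements between passes."""
    return initial * (1 - monthly_failure_rate) ** months

for m in (1, 2, 3):
    print(m, round(expected_survivors(m)))  # 1999, 1998, 1997
```

At that assumed rate you lose about one processor a month, which is the kind of degradation a shared-load app barely notices between service passes.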




Jim Lux

-----Original Message-----
From: [email protected] [mailto:[email protected]] On 
Behalf Of Bill Broadley
Sent: Wednesday, January 16, 2013 12:52 PM
To: [email protected]
Subject: Re: [Beowulf] AMD Roadrunner open compute motherboard

On 01/16/2013 10:20 AM, Hearns, John wrote:
> http://www.theregister.co.uk/2013/01/16/amd_roadrunner_open_compute_mo
> therboard/

The pictured 1U has what looks like 12 15k RPM fans (not including the power 
supplies).  Or 6 double fans if you prefer.  In my experience those fans burn 
an impressive amount of power (40-60 watts), make an impressive amount of 
noise, and introduce substantial vibration into the chassis.  That might be 
worth it, but they don't actually move much air.

I'd much rather have a quad node 2U chassis (with 2U fans).  When compared to 
the pictured 1U you get:
* 1/4th the power cables per node
* less chassis per node
* efficient fans/airflow
* better power efficiency
* a price that's often less than the four 1U nodes.

_______________________________________________
Beowulf mailing list, [email protected] sponsored by Penguin Computing To 
change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
