On 03/06/2013, at 10:15 AM, Chris Barnes wrote:

> Wow thanks for that Glen.
> 
> Stacks of useful info. Given me a bit more to think about.

Personally, if I were building a cluster of RPis I'd use the serial
console for remote management. The main reason is that crash
information gets printed to the console.

I'd pull each RPi's serial console pins back to a board, terminate them on
Prolific USB-to-serial chips, connect those chips through a cascade of USB
hub chips, and present the lot to the management console (say, the USB port
of another RPi). The management host would then see a /dev/ttyUSB__ per RPi.
All of this is low-speed stuff, so you could breadboard it.
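
With that many identical adapters the ttyUSB numbering will shuffle across
reboots, so you'd pin each hub port to a stable name with udev. A minimal
sketch, with a made-up KERNELS path (read the real one for each port with
"udevadm info"):

    # /etc/udev/rules.d/99-rpi-consoles.rules (illustrative)
    # Match the adapter hanging off one particular hub port and give it a
    # predictable symlink; repeat per port.
    SUBSYSTEM=="tty", SUBSYSTEMS=="usb", KERNELS=="1-1.2:1.0", SYMLINK+="rpi-console-01"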

You'd make all of that manageable by using conserver <http://www.conserver.com/>.
Configure conserver to:
 - record the messages seen from each RPi to syslog.
 - enable the "console ..." command, so you can connect to a particular
   RPi's console.
A minimal configuration along those lines is sketched below.
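
A sketch, assuming the stable device names from the udev rules above (console
output goes to per-console logfiles here, which conserver does natively; you'd
forward those into syslog separately if you want a single stream):

    # /etc/conserver.cf (illustrative only)
    default * {
        logfile /var/log/conserver/&.log;  # "&" expands to the console name
        timestamp 1hab;                    # hourly stamps, log activity/breaks
        rw *;                              # who may attach read/write
        master localhost;
        type device;
        baud 115200;
        parity none;
    }
    console rpi-01 { device /dev/rpi-console-01; }
    console rpi-02 { device /dev/rpi-console-02; }
    # ...one stanza per RPi

After that, "console rpi-01" on the management host attaches you to that
RPi's serial console.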

You can even set up sshd so that when you SSH to a particular service, sshd
executes conserver's "console <ssh_service_name>" command, letting you SSH
directly to the console of a particular RPi without touching the command line
of the management computer. In fact if you use IPv6 you can give each console's
SSH service its own IPv6 address.
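
One way to wire that up is sshd's Match/ForceCommand; the addresses below are
placeholders from the IPv6 documentation prefix, with each one bound to the
management host as an alias:

    # /etc/ssh/sshd_config on the management host (illustrative)
    Match LocalAddress 2001:db8::101
        ForceCommand /usr/bin/console rpi-01
    Match LocalAddress 2001:db8::102
        ForceCommand /usr/bin/console rpi-02

Then "ssh admin@2001:db8::101" drops you straight onto rpi-01's console.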

That in turn means you could use one of the "parallel" SSH clients to issue
commands simultaneously to the consoles of all of the RPis, as in the sketch
below. I wouldn't usually manage the cluster by hand over ssh -- that's what
Puppet is for -- but it is very useful all the same.
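
For example, with the parallel-ssh client (consoles.txt is a hypothetical
file listing the per-console IPv6 addresses from above):

    $ cat consoles.txt
    2001:db8::101
    2001:db8::102

    # sshd forces "console ..." on each connection, so -I sends the
    # same keystrokes to every RPi's console at once
    $ echo uptime | pssh -h consoles.txt -l admin -I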

The other software you need to know about is collectd. It's how the management
platform gathers the statistics it needs for capacity planning across all of
the machines in the cluster.
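
A sketch of both ends, with hypothetical names and addresses (the network
plugin's default port is 25826):

    # /etc/collectd/collectd.conf on each RPi
    Hostname "rpi-01"
    LoadPlugin cpu
    LoadPlugin memory
    LoadPlugin interface
    LoadPlugin network
    <Plugin network>
        Server "2001:db8::1" "25826"   # push readings to the management host
    </Plugin>

    # and on the management platform: listen and store
    LoadPlugin network
    LoadPlugin rrdtool
    <Plugin network>
        Listen "::" "25826"
    </Plugin>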

In your prototype of the cluster, simply use retail parts rather than building
a board:
 - an RS-232/USB dongle with a 3.3V serial interface, for example
   <https://www.modmypi.com/raspberry-pi-accessories/cables/USB-to-TTL-Serial-Cable-Debug-Console-Cable-for-Raspberry-Pi>
 - a powered USB hub, one from the list at
   <http://elinux.org/RPi_VerifiedPeripherals#Powered_USB_Hubs>

Density-wise, I'd see you building the rack from bespoke 2RU shelves, each
holding 46 RPis and including a 48-port 10/100 ethernet switch with a 1Gbps
uplink. At the top of the rack you'd lose 3RU for the 5V rectifier (whose
output you'd drop down the rack using cable-TV power cable and vampire taps
into the power bus of each shelf); 1RU for a 24-port gigabit ethernet switch
with four 10Gbps ports; and 1RU for the management platform (an Intel server
with a 10GE interface) and its disks. That leaves 40RU, so the 45RU rack holds
20 shelves of 46 RPis each, giving a cluster of 920 RPis per rack. Power draw
would be about 6,500W per rack. The result would be about 644,000 BogoMIPS
(920 RPis at roughly 700 BogoMIPS each).

For comparison, a Supermicro FatTwin plus a 10GE switch consumes 5RU and
2,000W for 896,000 BogoMIPS, and that includes 48 spinning disks.

-glen
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
