I like the shared socket approach. Building a separate IPMI network seems like a lot of extra wiring to me. Admittedly the IPMI switches can be dirt cheap, but it still feels like building an extra tiny road for one car a day when a huge highway with spare capacity exists right next door carrying thousands of cars. (OK, cheesy analogy!)
Errrr.... you missed all my Beowulf posts about the clashes between the IPMI ports and the ports used for 'rsh' connections on a cluster then? And all the shenanigans with setting sunrpc.min_resvport etc.?

Having a separate, simple IPMI network which comes up when you power the racks up has a lot of advantages. 10/100 Netgear switches cost almost nothing, and getting another loom of Cat5 cables configured when the racks are being built is relatively easy.

By the way, which hardware do you use?

_______________________________________________
Beowulf mailing list, [email protected] sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
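For anyone who hasn't hit the sunrpc.min_resvport issue: the kernel's SUNRPC client grabs ports from the privileged reserved range, the same range rsh's reserved-port connections come from, so on a cluster the two can collide. A sketch of the workaround is below; the port values are illustrative only, and you'd pick a range clear of whatever your rsh and IPMI traffic actually needs:

```shell
# Illustrative sysctl fragment, e.g. in /etc/sysctl.conf (values are
# examples, not a recommendation). min_resvport/max_resvport bound the
# reserved ports the in-kernel SUNRPC client binds for NFS mounts etc.,
# so raising the floor keeps it off the ports rsh wants to use.
sunrpc.min_resvport = 900
sunrpc.max_resvport = 1023
```

Apply at runtime with `sysctl -w sunrpc.min_resvport=900` (needs root, and the sunrpc module loaded); `sysctl sunrpc.min_resvport` shows the current value.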
