Hi,

On Wed, 28 Oct 2009, Brian A. Seklecki wrote:
> All:
>
> Here are a few quick notes & photos on ripping the PERC6 out of an r710
> and replacing it with an Areca ARC-1680IX-12:
>
> Photos: http://digitalfreaks.org/~lavalamp/cp/thumbnails.php?album=52
>
> 1) The PERC6 performance is really poor. It's really slow to write on
>    any RAID level. In RAID5, it averages writes of ~30-50 MBps, whereas
>    Areca cards average 300-400 MBps.
>
>    - It's even faster with ZFS on FreeBSD/amd64 RELENG_8 and some
>      sysctl tweaks.
>
>    - The management interface is horrible, the documentation for the
>      proprietary CLI is horrible, and after 10 years it has yet to be
>      integrated into the IPMI BMC.
>
>    - It's probably not Dell's fault. LSI/QLogic makes the chip; blame
>      them. (But it's Dell that takes it in the hilt, repeatedly, with
>      every generation of server, when they renew the contract.)
>
>    I used to get faster disk I/O from the SBus QLogic FAS408 in my
>    SparcStation 20.
>
>    Anyway, you get what you pay for with Dell... but you can get a lot
>    more with Areca for what you pay for with Dell!?
>
> 2) The latest r710 and PERC6 use the industry-standard SAS SFF-8087
>    internal cable connector between the HBA and the backplane.
>
>    That means you can just swap out the HBA, or, if you're one of
>    Dell's big embedded clients, order the unit w/o PERC6 or have Dell
>    ship you whatever you want (3Ware?), probably.
>
> 3) Installation of the Areca ARC-1680IX-12
>
>    We used a PCIe x8 SAS RAID card with 512 MB of cache. We purchased
>    it off of NewEgg.com for approximately the same price as a PERC6
>    adds to an r710. The LCD monitor and battery put it slightly over.
>
>    The card is PCIe x8 with 512 MB of DDR cache onboard and a 2 GB DDR
>    add-on, with support for up to a 4 GB add-on (PERC can't compete
>    here).
>
>    The card has its own Intel IOP348 1200 MHz CPU, IPv4-enabled
>    firmware (Web, SSH, Telnet, SNMP, SMTP), and very decent F/OSS
>    management support.
>
>    You can see photos of the ARC-1680IX-12 in pictures 10, 13, and 14.
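[The ~30-50 vs. 300-400 MBps figures above are easy to sanity-check on your own hardware. A crude sequential-write test with dd (GNU dd syntax shown; on FreeBSD use bs=1m, and check that your dd supports conv=fsync) might look like this — the target path and size here are illustrative:

```shell
# Crude sequential-write throughput check (GNU dd syntax).
# conv=fsync makes dd flush to stable storage before reporting,
# so the controller's write cache doesn't inflate the MB/s figure.
dd if=/dev/zero of=/tmp/seqwrite.bin bs=1M count=256 conv=fsync

# Clean up the test file afterwards.
rm -f /tmp/seqwrite.bin
```

dd prints bytes transferred, elapsed time, and throughput on stderr. Point it at a file on the array you actually want to measure (not /tmp on another disk), and use a size well past the controller's cache for honest numbers.]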
>    The external connectors on the card are:
>
>    - RS232 over RJ11/RJ14 (the included cable terminates in a DB9M)
>    - Ethernet management
>    - External SAS SFF-8088
>
> 3.1) Installation Notes
>
>    In pictures 3 and 4, you can see the Dell SFF-8087 cables from the
>    PERC6 terminating into the backplane. The cables run along a
>    raceway on the right side of the case (oriented looking at the
>    faceplate).
>
>    In pictures 1 and 2, if you remove the CPU/RAM cover and the front
>    fan bank, you can see the cables in the raceway.
>
>    Trace them back to the PERC6 and disconnect them from the HBA
>    (Dell used a proprietary ribbon connector on the PERC6 side; good
>    thinking, Dell! Look at how well proprietary worked for IBM, Sun,
>    etc.). This connector can be seen in pictures 6 and 12.
>
>    Pull the cables out of the raceway, then disconnect the SFF-8087
>    from the SAS backplane.
>
>    As you can see in picture 6, the PERC6 is secured in place by a
>    special PCIe port retainer that reminds me of the MCA and EISA
>    cards in my PS/2 servers.
>
>    The retainer is a T-16 hex nut head, as seen in picture 8. Failing
>    that, use an acetylene welder or plasma torch.
>
>    Install your Areca card in the top PCIe x8/x16 port (picture 15).
>
>    Run the SFF-SFF cable included with the Areca into the r710
>    backplane raceway. See pictures 16 and 20.
>
>    You may need to run multiple SFF cables depending on your
>    backplane configuration.
>
>    Note: 90-degree angled cables would be best. Dell has them
>    custom-made, apparently; I can't find them on the Interwebs, so I
>    carefully bent the SFF connector.
>
>    Restore the fan array and CPU/RAM cover.
>
>    Note: in photo 21 the SAS/SFF cable goes above the cover, so tuck
>    the cover under the cable (a 90-degree cable mitigates this).
>
>    Final cable routing is seen in picture 22.
>
>    Restore the case and experience an instantaneous "I/O'gasm" as
>    your $16k server screams to life.
>
>    Did I mention that the Areca has volume management built into it?
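[Point 1 above credits ZFS on FreeBSD RELENG_8 "and some sysctl tweaks" for extra throughput, but the post doesn't list the exact settings. As a hedged sketch, the knobs most commonly tuned on FreeBSD 7.x/8.x-era ZFS boxes were loader tunables along these lines — the tunable names are real, but the values below are examples only, not the author's:

```shell
# /boot/loader.conf -- illustrative FreeBSD 7.x/8.x-era ZFS tuning.
# Size these to your machine's RAM and workload; do not copy blindly.
vm.kmem_size="8G"             # raise the kernel memory ceiling for ZFS
vfs.zfs.arc_max="6G"          # cap the ARC so userland keeps some RAM
vfs.zfs.prefetch_disable="0"  # keep file-level prefetch on for sequential I/O
```

These take effect at boot; on later FreeBSD releases most of this self-tunes and vm.kmem_size is rarely needed.]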
> :}
>
>    Walk directly to the local bar and buy everyone a few rounds with
>    the money you saved by having a few fast servers instead of a
>    datacenter full of them trying to keep up with the Slony backlog.
>
> Good luck, and let me know if you have any questions (or where to
> find some slick SFF-8087 cables with a 90-degree angled connector).
>
> You can see a dmesg(8) for the r710 w/ Areca for NetBSD/amd64
> -current from last month at:
>
> http://www.nycbug.org/?NAV=dmesgd;f_dmesg=;f_bsd=;f_nick=;f_descr=;dmesgid=2016#2016

Splendid. Brian, you are one of the greatest. Really. Thanks for this
highlight.

Best regards
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)

-- 
Eberhard Moenkeberg            Arbeitsgruppe IT-Infrastruktur
E-Mail: emoe...@gwdg.de        Tel.: +49 (0)551 201-1551
-------------------------------------------------------------------------
Gesellschaft fuer wissenschaftliche Datenverarbeitung mbH Goettingen (GWDG)
Am Fassberg 11, 37077 Goettingen
URL: http://www.gwdg.de        E-Mail: g...@gwdg.de
Tel.: +49 (0)551 201-1510      Fax: +49 (0)551 201-2150
Managing Director: Prof. Dr. Bernhard Neumair
Chairman of the Supervisory Board: Dipl.-Kfm. Markus Hoppe
Registered office: Goettingen
Register court: Goettingen     Commercial register no. B 598
-------------------------------------------------------------------------

_______________________________________________
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq