Re: [CentOS] RAID storage - SATA, SCSI, or Fibre Channel?
On Mon, Aug 20, 2007 at 04:23:49PM -0400, Scott Ehrlich wrote:
> I have a Dell PowerEdge 2950 and am looking to add more storage. I know
> a lot of factors can go into the type of answer given, but for present
> and future technology planning, should I look for a rack of SATA, SCSI,
> or fibre channel drives? Maybe I'm dating myself with fibre channel, and
> possibly SCSI? I may be looking to add a few TB now, and possibly more
> later.

If you can afford it, get yourself a storage appliance like the Network Appliance filer. They can do NFS much better than a generic Linux system. NetApps give you NFS and iSCSI-over-Ethernet out of the box, and can do CIFS (i.e. Windows SMB sharing) for an additional cost. Depending on the unit you pick, they scale much more easily and much further than a Linux system can, and come with practically set-and-forget reliability and support. We've run NetApps for years, from the 700 series and 900 series, and are now deploying a baby 270 with 3TB (a single shelf!) that has the potential to grow to 14TB.

That said, you _pay_ for all that ability. If cost is a factor (and it rarely isn't), then this is probably more than you will want to spend.

-- 
/\oo/\
/ /()\ \
David Mackintosh | [EMAIL PROTECTED] | http://www.xdroop.com

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] RAID storage - SATA, SCSI, or Fibre Channel?
SAS all the way. Think of it as fibre channel that you can afford. You can also mix SATA and SAS disks, so the biggest difference between SATA II and SAS is really the cost of the host bus adapter (HBA). I don't have personal experience with external storage for SAS (expanders), though I haven't heard anything negative about them. You can support hundreds of disks in SAS via expanders.

Regarding credentials, I work on the adp94xx driver for Stratus Technologies. But don't take my word as gospel; prove to yourself that the extra ~$1000 for the SAS HBA is worth it. To give you an idea of performance:

[rhod05:37:38]# lspci | grep Adaptec
07:00.0 Serial Attached SCSI controller: Adaptec AIC-9410W SAS (Razor ASIC non-RAID) (rev 09)
7a:00.0 Serial Attached SCSI controller: Adaptec AIC-9410W SAS (Razor ASIC non-RAID) (rev 09)

[rhod05:30:59]# lsscsi
[12:0:128:0]  disk  SEAGATE  ST373455SS   0002  /dev/sda
[12:1:128:0]  disk  ATA      ST3250620NS  3.AE  /dev/sdb
[12:2:128:0]  disk  ATA      ST3500630NS  3.AE  /dev/sdc
[13:0:128:0]  disk  SEAGATE  ST373455SS   0002  /dev/sdd
[13:1:128:0]  disk  ATA      ST3250620NS  3.AE  /dev/sde
[13:2:128:0]  disk  ATA      ST3500630NS  3.AE  /dev/sdf

Keep in mind that the disk cache is set to write-through to avoid data corruption if the disk is pulled or the HBA disappears.

[rhod05:31:05]# hdparm -t -T /dev/sda

/dev/sda:
 Timing cached reads:   9604 MB in 2.00 seconds = 4802.73 MB/sec
 Timing buffered disk reads:  376 MB in 3.00 seconds = 125.23 MB/sec

[rhod05:31:26]# hdparm -t -T /dev/sdb

/dev/sdb:
 Timing cached reads:   10576 MB in 2.00 seconds = 5288.80 MB/sec
 Timing buffered disk reads:  230 MB in 3.01 seconds = 76.42 MB/sec

This driver version doesn't support NCQ, which is why sdb is such a dog. sda is also a 15K RPM disk, whereas sdb is only 7200 RPM. I haven't tested how well NCQ works in the upstream driver; I'm still working on RHEL 4.5.
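Peter's NCQ point can be checked on a running system: the kernel exposes each device's effective command queue depth under sysfs, and a depth of 1 generally means NCQ/TCQ is not in use. A minimal sketch, assuming device names sda and sdb (adjust for your own system):

```shell
# Report the effective command queue depth for each disk.
# A queue_depth of 1 usually means NCQ/TCQ is disabled or unsupported,
# so the drive services one request at a time (the "dog" case above).
for dev in sda sdb; do
  qd_file="/sys/block/$dev/device/queue_depth"
  if [ -r "$qd_file" ]; then
    echo "$dev queue depth: $(cat "$qd_file")"
  else
    echo "$dev: no queue_depth attribute (device absent or driver lacks support)"
  fi
done
```

The script degrades gracefully: if the device or attribute is missing (as on a driver without NCQ support), it says so instead of failing.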
When it comes to RAID we use md (level 1) because of its flexibility, so I can't attest to the hardware RAID performance of the HBA. If you're interested in the ability to hotplug your HBA, that feature will be available shortly :-).

Peter

On 8/20/07, Scott Ehrlich [EMAIL PROTECTED] wrote:
> I have a Dell PowerEdge 2950 and am looking to add more storage. I know
> a lot of factors can go into the type of answer given, but for present
> and future technology planning, should I look for a rack of SATA, SCSI,
> or fibre channel drives? Maybe I'm dating myself with fibre channel, and
> possibly SCSI? I may be looking to add a few TB now, and possibly more
> later.
>
> What are people using these days? What throughput and reliability are
> you seeing? What accounts for the cost differences?
>
> Thanks.
>
> Scott

-- 
www.alphalinux.org
del.icio.us/peter.petrakis
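The md level-1 (mirror) setup Peter mentions is built with mdadm. This is a hedged outline rather than a tested recipe: the device names /dev/sdb1 and /dev/sdc1 and the array name /dev/md0 are assumptions, and because `mdadm --create` destroys existing data, the sketch only prints the commands it would run:

```shell
# Sketch of a two-disk md RAID 1 (mirror) setup with mdadm.
# Device names are hypothetical; the script echoes the commands
# instead of executing them, since --create destroys existing data.
DISK1=/dev/sdb1
DISK2=/dev/sdc1

echo "mdadm --create /dev/md0 --level=1 --raid-devices=2 $DISK1 $DISK2"
echo "mkfs.ext3 /dev/md0"        # or your filesystem of choice
echo "cat /proc/mdstat"          # watch the initial resync progress
```

One flexibility advantage of md RAID 1 over HBA RAID: either half of the mirror remains a readable disk on its own, so the data survives the controller, not just the drives.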