On Fri, 15 Oct 1999, Marc SCHAEFER wrote:

> In article <[EMAIL PROTECTED]> you wrote:
> > 600$ you don't get a performance boost, but more reliability. So either
> > pay more for the hardware solution, or use software RAID and pay with
> > the time you spend setting it up.
> 
> If you have the time, could you do the same test with software RAID?
> 
  Finally we found the time to install the software RAID. Using the HOWTO
it was really easy to set it up and get it running, and so far it works
very well; performance is significantly faster than with the hardware
RAID controllers.
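  For anyone who wants to reproduce this: the whole setup boils down to a
raidtab in the style of the Software-RAID HOWTO. A minimal sketch (the
partition names and chunk size below are examples, not necessarily our
exact devices):

    # /etc/raidtab -- 3-disk RAID5; devices and chunk size are examples
    raiddev /dev/md0
            raid-level            5
            nr-raid-disks         3
            nr-spare-disks        0
            persistent-superblock 1
            chunk-size            32
            device                /dev/sda1
            raid-disk             0
            device                /dev/sdb1
            raid-disk             1
            device                /dev/sdc1
            raid-disk             2

After that it is just "mkraid /dev/md0" and a normal mke2fs on /dev/md0.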
  For comparison I repeat the hardware results. All tables below are
bonnie output with the following columns:
        -------Sequential Output-------- ---Sequential Input-- --Random--
        -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
With the DPT PM1554U2 (low-end version, 4 MB RAM, 3 channels):
 1*1024  3699 55.7  4967  6.2  3547  8.1  6304 87.8 19943 19.8 270.8  7.6
 1*1024  3881 50.2  4238  5.3  3538  7.9  5688 78.9 19873 19.4 344.4  9.6
 1*1024  3833 49.1  4941  6.3  3590  8.0  6120 84.8 19831 19.3 294.9  9.1
 1*1024  4702 72.2  4955  6.3  3058  6.8  6136 84.8 20286 19.2 288.4  7.4
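(All rows are runs with a 1 GB test file; with the classic bonnie that
corresponds to an invocation along the lines of "bonnie -s 1024 -d
/mnt/test", where /mnt/test stands for wherever the array is mounted.)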

With the AMI Megaraid (system was under some load):
 1*1024  5426 83.6 13444 19.6  4958 15.0  4735 69.8 11212 16.7 121.8  4.9
 1*1024  5437 83.9 12931 19.4  4345 13.7  5383 77.9 10867 15.2 121.7  5.3
 1*1024  4955 76.2 13613 20.2  4371 13.5  4554 67.2 12902 18.9 178.7  6.0
 1*1024  5726 87.3 13916 21.2  5097 15.6  5864 85.7 19635 28.2 158.0  6.4

  Software RAID5 using 3 18GB IBM disks connected to the different
channels of the same DPT 1554U2W RAID controller:
 1*1024  6193 94.1 12157 13.3  6360 17.1  6317 89.4 18319 21.1 284.4  7.9
 1*1024  6246 95.4 12064 13.3  6324 16.9  6320 89.4 18332 20.2 274.0  8.1
 1*1024  6266 95.5 11986 13.1  6241 16.5  6297 89.2 18225 21.3 292.3  7.9
 1*1024  6257 95.3 11727 11.4  6212 16.5  4835 71.2 11707 15.6 161.5  5.4
That is already quite a bit faster than the hardware raid controller.

  Software RAID5 using 3 18GB IBM disks all connected to a single Adaptec
AHA2940U2W LVD controller:
        -------Sequential Output-------- ---Sequential Input-- --Random--
        -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
 1*1024  6290 95.3 15145 17.3  8147 21.3  6430 90.6 22353 25.1 296.4  9.0
 1*1024  6332 96.2 14693 16.3  8109 21.5  6433 90.9 21826 24.9 325.3  9.7
 1*1024  6335 96.1 14739 16.7  8075 21.0  6422 90.6 22373 24.4 325.3  9.2
 1*1024  6305 96.0 14912 16.9  8129 21.1  6423 90.6 22471 25.2 342.8  9.9
This suggests that the DPT controller itself is not very efficient, and
that even the Megaraid cannot beat a software RAID setup on a single
controller (at least as long as you only use 3 disks). It might be a
different story if you use more than four disks and thus saturate the
SCSI bus. The read values are what you would expect (3 disks = 2*single,
since one disk's worth goes to parity), but the write overhead seems to
be significant. Is that to be expected?
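  (If I understand the RAID5 mechanics correctly, some of this is built
in: any write smaller than a full stripe becomes a read-modify-write of
the parity, roughly

    small write:       read old data + read old parity
                       + write new data + write new parity = 4 I/Os
    full-stripe write: 3 disk writes for 2 chunks of payload (with N=3)

plus the CPU time to compute the parity, so block writes staying well
below 2*single would not be surprising.)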

With a four-disk setup you get a significant performance increase for
writing, but not much for reading (again software RAID with one
AHA2940U2W LVD controller):
 1*1024  6289 97.2 29202 34.1 11567 30.0  6742 95.2 26381 29.1 423.5 11.8
 1*1024  6270 97.0 25203 30.0 11715 30.3  6926 97.3 25459 28.2 378.9 10.2
 1*1024  6291 97.3 25963 31.1 11517 30.1  6940 97.6 25859 27.7 415.3 12.7
 1*1024  6270 97.0 26813 33.0 11724 30.6  6747 95.1 26227 27.9 314.0  7.7
Shouldn't the performance increase roughly linearly with the number of
disks, up to a saturation point close to what the controller can handle?
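  (Back-of-the-envelope from the numbers above: block reads go from
roughly 10 MB/s on a single disk to about 22 MB/s with 3 disks, i.e. the
expected 2*single, but only to about 26 MB/s with 4 disks, where 3*single
would be about 30 MB/s. So the reads stop scaling well before an U2W bus
should be saturated; whether the limit is the md driver or something else
I cannot tell from this test alone.)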

As a reference, here is the single-disk (non-RAID) performance:
 1*1024  4787 77.6  9323 18.8  3608 11.6  5503 79.2  9899 13.3 124.6  4.8
 1*1024  5455 90.9 11316 17.6  3626 11.7  5196 74.6 10308 13.9 128.9  4.2
 1*1024  5426 90.4 11416 22.8  3627 12.0  5534 79.8 10265 13.3 125.4  4.8
 1*1024  5454 91.2  9476 18.7  3581 11.6  5410 77.4 10300 13.9 135.1  4.9

Klaus

 +#$*&^$>@$+)$@(*&@$>@+@)$)(&*@$>@$+@$)_*@$>@$+@$*)@>@$+_)@<@)(*$^@&@>?@!+@)$)(
 Klaus Schroer                                                  [EMAIL PROTECTED]
 Biology Department                                           
 Brookhaven Nat'l Lab                                   
