I really wouldn't call myself a RAID expert, but I'd expect random writes
on a RAID 5 system to carry a performance penalty: when, say, 1 of the 4
data blocks already on disk changes, the parity for that stripe has to be
recalculated, so the other 3 blocks have to be read back in and then 2
blocks written out - the new data block plus the new parity. RAID 10 has
no parity to worry about, so it simply writes the data to the appropriate
disks and that's the end of it. A sequential write on RAID 5 shouldn't
suffer from this, as the parity for each full stripe can be calculated in
memory before anything is written.
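
To make the parity arithmetic concrete, here's a minimal sketch in Python
(the function names are mine, and I'm assuming simple XOR parity over
equal-sized blocks, which is how RAID 5 parity works). Note that a
controller can also take a shortcut: the new parity follows from just the
old data block and the old parity, without re-reading the other blocks:

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

def parity(blocks):
    """Full-stripe parity: XOR of all data blocks (what a sequential
    full-stripe write can compute in memory up front)."""
    p = blocks[0]
    for blk in blocks[1:]:
        p = xor_blocks(p, blk)
    return p

def update_parity(old_parity, old_block, new_block):
    """Read-modify-write shortcut for a small random write: XOR the old
    data and the new data into the old parity, giving the new parity.
    Either way, a small write costs extra reads plus two writes."""
    return xor_blocks(xor_blocks(old_parity, old_block), new_block)
```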

I'm really not sure why the sequential writes got slower, but I imagine
this could very well be down to all the disks being on the one bus, where
they were spread across 2 before. Can you try putting the disks on
different buses in the RAID 10 system? I think the best layout is one set
of stripes on each bus, so each mirror pair spans both buses.

On 2/23/07, Craig Dibble <[EMAIL PROTECTED]> wrote:

Hi all,

I have a question for any hardware experts out there: I'm currently
scratching my head over an unexpected performance issue with a relative
monster of a new machine compared to its older, supposedly superseded
counterpart.

Brief outline:

Server A: 2 x 3GHz Xeon (with hyperthreading shows as 4 CPUs)
          2GB RAM
          900GB RAID5 array comprised of 8x146GB 7500rpm disks on 2
          spindles, with a stripe size of 64k

Server B: 4 x Dual Core 2.66GHz Xeon (with HT shows as 16 CPUs)
          3GB RAM
          900GB RAID1+0 comprised of 6x300GB 10krpm disks on 1 spindle,
          with a stripe size of 128k

Running write tests on both boxes and comparing sequential with random
writes gives some very unusual results that I'm having trouble
interpreting. The test program creates a 1GB file, then writes to it
again in 8k chunks. Each test is run twice, to counter any effects from
the state of the controller cache, so the second result should be the
more realistic, or 'better', number:
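
[A rough sketch of the kind of test described above, for anyone wanting to
reproduce it. The function names, the shared write buffer, and the single
fsync at the end are assumptions, not the original test program:]

```python
import os
import random
import time

def sequential_write(path, file_size, chunk):
    """Write file_size bytes to path in chunk-sized pieces, front to
    back; return the elapsed wall-clock time in seconds."""
    buf = os.urandom(chunk)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(file_size // chunk):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    return time.time() - start

def random_write(path, file_size, chunk):
    """Rewrite the existing file in chunk-sized pieces at shuffled
    offsets; return the elapsed wall-clock time in seconds."""
    buf = os.urandom(chunk)
    offsets = list(range(0, file_size, chunk))
    random.shuffle(offsets)
    start = time.time()
    with open(path, "r+b") as f:
        for off in offsets:
            f.seek(off)
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    return time.time() - start
```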

Server A:

Fri Feb 23 03:00:01 EST 2007

sequentialWrite: 28.96 seconds
sequentialWrite: 28.88 seconds
randomWrite: 659.32 seconds
randomWrite: 701.60 seconds

Server B:

Fri Feb 23 03:00:01 EST 2007

sequentialWrite: 52.76 seconds
sequentialWrite: 41.32 seconds
randomWrite: 81.39 seconds
randomWrite: 82.20 seconds


What I can't explain here is why a sequential write on the new box would
take roughly 1.5 to 2 times as long as on the old box, yet the random
write is around 8 to 8.5 times faster.

Has anyone seen anything like this in the past and would care to hit me
with a cluestick about how I might fix it? Is it possible that a
combination of the stripe size and the single spindle on the new box
could be slowing it down to this extent, or is there something else I am
missing?

TIA for any sage counsel,
Craig
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
