Re: MD RAID1 performance very different from non-RAID partition

2007-09-15 Thread Iustin Pop
On Sat, Sep 15, 2007 at 12:28:07AM -0500, Jordan Russell wrote:
 (Kernel: 2.6.18, x86_64)
 
 Is it normal for an MD RAID1 partition with 1 active disk to perform
 differently from a non-RAID partition?
 
 md0 : active raid1 sda2[0]
   8193024 blocks [2/1] [U_]
 
 I'm building a search engine database onto this partition. All of the
 source data is cached into memory already (i.e., only writes should be
 hitting the disk).
 If I mount the partition as /dev/md0, building the database consistently
 takes 18 minutes.
 If I stop /dev/md0 and mount the partition as /dev/sda2, building the
 database consistently takes 31 minutes.
 
 Why the difference?
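
In other words, the comparison being made is roughly this (the mount
point and the database build step are placeholders, not taken from the
post above):

    # run 1: filesystem mounted via the RAID1 device
    mount /dev/md0 /mnt/build
    time <database build>            # consistently ~18 minutes

    # run 2: array stopped, same partition mounted directly
    umount /mnt/build
    mdadm --stop /dev/md0
    mount /dev/sda2 /mnt/build
    time <database build>            # consistently ~31 minutes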

Maybe it's because md doesn't support barriers whereas the disk itself
does? In that case some filesystems, XFS for example, will appear faster
on raid1 because they can't force a flush to disk using barriers.
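
One way to check is to look at the kernel log after mounting; filesystems
usually log a message when they have to turn barriers off (the exact text
varies by filesystem and kernel version):

    # did the filesystem disable barriers on this device?
    dmesg | grep -i barrier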

Just a guess...

regards,
iustin


Re: MD RAID1 performance very different from non-RAID partition

2007-09-15 Thread Goswin von Brederlow
Iustin Pop [EMAIL PROTECTED] writes:

 On Sat, Sep 15, 2007 at 12:28:07AM -0500, Jordan Russell wrote:
 (Kernel: 2.6.18, x86_64)
 
 Is it normal for an MD RAID1 partition with 1 active disk to perform
 differently from a non-RAID partition?
 
 md0 : active raid1 sda2[0]
   8193024 blocks [2/1] [U_]
 
 I'm building a search engine database onto this partition. All of the
 source data is cached into memory already (i.e., only writes should be
 hitting the disk).
 If I mount the partition as /dev/md0, building the database consistently
 takes 18 minutes.
 If I stop /dev/md0 and mount the partition as /dev/sda2, building the
 database consistently takes 31 minutes.
 
 Why the difference?

 Maybe it's because md doesn't support barriers whereas the disk itself
 does? In that case some filesystems, XFS for example, will appear faster
 on raid1 because they can't force a flush to disk using barriers.

 Just a guess...

 regards,
 iustin

Shouldn't it be the other way around? With barriers the filesystem
can enforce an ordering on the data written and then continue writing
into the cache, so more data is queued up for writing. Without barriers
the filesystem has to issue a sync at that point and wait for the write
to finish completely, so less ends up in the cache.

Or not?

MfG
Goswin


Re: MD RAID1 performance very different from non-RAID partition

2007-09-15 Thread Iustin Pop
On Sat, Sep 15, 2007 at 02:18:19PM +0200, Goswin von Brederlow wrote:
 Shouldn't it be the other way around? With barriers the filesystem
 can enforce an ordering on the data written and then continue writing
 into the cache, so more data is queued up for writing. Without barriers
 the filesystem has to issue a sync at that point and wait for the write
 to finish completely, so less ends up in the cache.

I don't know in general, but XFS will simply not issue any sync at all
if the block device doesn't support barriers. It's the sysadmin's job to
either ensure that barriers work or to turn off the disk's write cache
(see the XFS FAQ, for example).
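
As a rough sketch of that advice (the device name is only an example):

    # show the drive's current write-cache setting, then turn it off so
    # acknowledged data cannot sit only in the drive's volatile cache
    hdparm -W /dev/sda
    hdparm -W 0 /dev/sda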

However, I have never seen such behaviour from MD (i.e. claiming a write
has completed while the disk underneath is still receiving data from
Linux), so I'm not sure that is what is happening here. In my experience,
MD acknowledges a write only once it has been pushed to the drive (write
cache enabled or not), and there is no buffer between MD and the drive.

regards,
iustin


Re: MD RAID1 performance very different from non-RAID partition

2007-09-15 Thread Jordan Russell
Iustin Pop wrote:
 Maybe it's because md doesn't support barriers whereas the disk itself
 does? In that case some filesystems, XFS for example, will appear faster
 on raid1 because they can't force a flush to disk using barriers.

It's an ext3 partition, so I guess that doesn't apply?

Just to see if it would make any difference, though, I tried remounting
/dev/sda2 with the barrier=0 option (which, from a look at the source, I
assume disables barriers). It didn't help; the database build still took
31 minutes.
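
For reference, the remount was along these lines (the mount point is just
an example):

    # disable ext3 write barriers on an already-mounted filesystem
    mount -o remount,barrier=0 /dev/sda2 /mnt/data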

-- 
Jordan Russell


Re: reducing the number of disks a RAID1 expects

2007-09-15 Thread J. David Beutel

Neil Brown wrote:

 2.6.12 does support reducing the number of drives in a raid1, but it
 will only remove drives from the end of the list. e.g. if the
 state was

   58604992 blocks [3/2] [UU_]

 then it would work.  But as it is

   58604992 blocks [3/2] [_UU]

 it won't.  You could fail the last drive (hdc8) and then add it back
 in again.  This would move it to the first slot, but it would cause a
 full resync which is a bit of a waste.

Thanks for your help!  That's the route I took.  It worked ([2/2] 
[UU]).  The only hiccup was that when I rebooted, hdd2 was back in the 
first slot by itself ([3/1] [U__]).  I guess there was some contention 
in discovery.  But all I had to do was physically remove hdd and the 
remaining two were back to [2/2] [UU].
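
For the record, the route looked roughly like this (the md device name
/dev/md0 is an assumption; hdc8 is the drive named above):

    # fail the drive in the last slot and remove it from the array
    mdadm /dev/md0 --fail /dev/hdc8 --remove /dev/hdc8

    # add it back; it lands in the first free slot, at the cost of a
    # full resync
    mdadm /dev/md0 --add /dev/hdc8

    # once the empty slot is at the end ([UU_]), shrink the array to
    # two devices
    mdadm --grow /dev/md0 --raid-devices=2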



 Since commit 6ea9c07c6c6d1c14d9757dd8470dc4c85bbe9f28 (about
 2.6.13-rc4) raid1 will repack the devices to the start of the
 list when trying to change the number of devices.

I couldn't find a newer kernel RPM for FC3, and I was nervous about 
building a new kernel myself and screwing up my system, so I went the 
slot rotate route instead.  It only took about 20 minutes to resync (a 
lot faster than trying to build a new kernel).


My main concern was that it would discover an unreadable sector while 
resyncing from the last remaining drive and I would lose the whole 
array.  (That didn't happen, though.)  I looked for some mdadm command 
to check the remaining drive before I failed the last one, to help avoid 
that worst case scenario, but couldn't find any.  Is there some way to 
do that, for future reference?
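
For future reference, something along these lines might do it (I'm not
sure it exists on the 2.6.12 kernel in question; newer kernels have a
sysfs-triggered check, and the md device name here is an assumption):

    # ask md to read and verify every member of the running array
    echo check > /sys/block/md0/md/sync_action
    cat /proc/mdstat                 # progress shows up as a check

    # read-only surface scan of a single component, e.g. hdd2
    badblocks -sv /dev/hdd2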


Cheers,
11011011