RAID1: can't remove (or set-faulty) a disk during resync with mdadm

2006-05-02 Thread David Mansfield

Hi,

I'm running CentOS 4.3 with the latest kernel, so perhaps this is a 
'vendor uses an old/modified kernel' problem (the kernel is 2.6.9-34.EL), 
but anyway, here goes:


I have a degraded mirror.  The rebuild is proceeding with /dev/hda1 
'good' and /dev/hdb1 'syncing'.


I'd like to pull /dev/hdb1 out of the raid and go back to 'degraded' 
mode with no resync.


When I run mdadm --manage -f /dev/md1 /dev/hdb2, it only causes the 
resync to start again from the beginning; it doesn't actually mark the disk faulty.


The same thing happens if there's a write error to /dev/hdb1 during 
resync: instead of failing the disk, it simply restarts the resync.


I imagine the two are related - maybe 'set faulty' simply simulates an 
I/O error on the member, but during resync the behavior is 'retry'.


Is there anything that can be done about this (other than politely asking 
the vendor for a fix ;-)?


David
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: RAID1: can't remove (or set-faulty) a disk during resync with mdadm

2006-05-02 Thread Gil
David Mansfield wrote:
 When I run mdadm --manage -f /dev/md1 /dev/hdb2 it only causes the
 resync to start again from the beginning, it doesn't actually mark it bad.

For grins, does mdadm --manage /dev/md1 -f /dev/hdb2 behave
differently?  Or just mdadm /dev/md1 -f /dev/hdb2?

I ran basically the last one on a CentOS 4.3 box not more than a
week ago and it was fine.

--Gil


Re: [PATCH 009 of 11] md: Support stripe/offset mode in raid10

2006-05-02 Thread Neil Brown
On Tuesday May 2, [EMAIL PROTECTED] wrote:
 NeilBrown wrote:
  The industry standard DDF format allows for a stripe/offset layout
  where data is duplicated on different stripes. e.g.
 
A  B  C  D
D  A  B  C
E  F  G  H
H  E  F  G
 
  (columns are drives, rows are stripes, LETTERS are chunks of data).
 
 Presumably, this is the case for --layout=f2 ?

Almost.  mdadm doesn't support this layout yet.  
'f2' is a similar layout, but the offset stripes are a lot further
down the drives.
It will possibly be called 'o2' or 'offset2'.

 If so, would --layout=f4 result in a 4-mirror/striped array?

o4 on a 4 drive array would be 

   A  B  C  D
   D  A  B  C
   C  D  A  B
   B  C  D  A
   E  F  G  H
   

 
 Also, would it be possible to have a staged write-back mechanism across 
 multiple stripes?

What exactly would that mean?  And what would be the advantage?

NeilBrown


Re: [PATCH 009 of 11] md: Support stripe/offset mode in raid10

2006-05-02 Thread Al Boldi
Neil Brown wrote:
 On Tuesday May 2, [EMAIL PROTECTED] wrote:
  NeilBrown wrote:
   The industry standard DDF format allows for a stripe/offset layout
   where data is duplicated on different stripes. e.g.
  
 A  B  C  D
 D  A  B  C
 E  F  G  H
 H  E  F  G
  
   (columns are drives, rows are stripes, LETTERS are chunks of data).
 
  Presumably, this is the case for --layout=f2 ?

 Almost.  mdadm doesn't support this layout yet.
 'f2' is a similar layout, but the offset stripes are a lot further
 down the drives.
 It will possibly be called 'o2' or 'offset2'.

  If so, would --layout=f4 result in a 4-mirror/striped array?

 o4 on a 4 drive array would be

A  B  C  D
D  A  B  C
C  D  A  B
B  C  D  A
E  F  G  H


Yes, so would this give us 4 physically duplicate mirrors?
If not, would it be possible to add a far-offset mode to yield such a layout?
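Neil's o4 diagram above amounts to a one-drive rotation for each repeated stripe: copy r of a data stripe is the stripe rotated right by r drives. A minimal sketch of that rule (offset_layout is a hypothetical illustration, not mdadm or kernel code):

```python
def offset_layout(num_drives, copies, num_stripes):
    """Rows of the proposed raid10 'offset' layout.

    Each data stripe is written `copies` times as consecutive rows,
    with copy r rotated right by r drives, before the next data
    stripe begins.  Chunks are labelled A, B, C, ... as in the thread.
    """
    rows = []
    chunk = 0
    for _ in range(num_stripes):
        stripe = [chr(ord('A') + chunk + d) for d in range(num_drives)]
        chunk += num_drives
        for r in range(copies):
            # rotate right by r: last r chunks move to the front
            rows.append(stripe[-r:] + stripe[:-r] if r else stripe[:])
    return rows
```

With num_drives=4 and copies=2 this reproduces the DDF stripe/offset diagram (A B C D / D A B C / E F G H / H E F G); with copies=4 it reproduces the o4 rows quoted above.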

  Also, would it be possible to have a staged write-back mechanism across
  multiple stripes?

 What exactly would that mean?

Write the first stripe immediately, then write the subsequent duplicate 
stripes whenever the device is idle, with a maximum delay bound on each 
deferred stripe.

 And what would be the advantage?

Faster burst writes, probably.
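The proposal could be sketched roughly as follows (purely illustrative: staged_writeback, idle_times, and max_delay are hypothetical names for this thread's idea, not an existing md interface; times are abstract units and idle_times is assumed sorted):

```python
def staged_writeback(writes, idle_times, max_delay):
    """Simulate the proposed staging of duplicate stripes.

    The primary stripe completes at submit time; its duplicate is
    deferred to the first idle point at or after submission, but is
    forced out no later than submit + max_delay.  Returns a list of
    (submit_time, duplicate_flush_time) pairs.
    """
    flushed = []
    for submit in writes:
        deadline = submit + max_delay
        # first idle point at or after this write, if any
        idle = next((t for t in idle_times if t >= submit), None)
        flush = min(idle, deadline) if idle is not None else deadline
        flushed.append((submit, flush))
    return flushed
```

The burst-write advantage shows up when idle time arrives before the deadline: the duplicate costs nothing on the critical path, while max_delay caps how long the array stays with only one valid copy of a stripe.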

Thanks!

--
Al
