Re: How many drives are bad?

2008-02-19 Thread Justin Piszcz

How many drives actually failed?

Failed Devices : 1



On Tue, 19 Feb 2008, Norman Elton wrote:


So I had my first "failure" today, when I got a report that one drive
(/dev/sdam) failed. I've attached the output of "mdadm --detail". It
appears that two drives are listed as "removed", but the array is
still functioning. What does this mean? How many drives actually
failed?

This is all a test system, so I can dink around as much as necessary.
Thanks for any advice!

Norman Elton

== OUTPUT OF MDADM =

   Version : 00.90.03
 Creation Time : Fri Jan 18 13:17:33 2008
Raid Level : raid5
Array Size : 6837319552 (6520.58 GiB 7001.42 GB)
   Device Size : 976759936 (931.51 GiB 1000.20 GB)
  Raid Devices : 8
 Total Devices : 7
Preferred Minor : 4
   Persistence : Superblock is persistent

   Update Time : Mon Feb 18 11:49:13 2008
 State : clean, degraded
Active Devices : 6
Working Devices : 6
Failed Devices : 1
 Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

  UUID : b16bdcaf:a20192fb:39c74cb8:e5e60b20
Events : 0.110

    Number   Major   Minor   RaidDevice State
       0      66        1        0      active sync   /dev/sdag1
       1      66       17        1      active sync   /dev/sdah1
       2      66       33        2      active sync   /dev/sdai1
       3      66       49        3      active sync   /dev/sdaj1
       4      66       65        4      active sync   /dev/sdak1
       5       0        0        5      removed
       6       0        0        6      removed
       7      66      113        7      active sync   /dev/sdan1

       8      66       97        -      faulty spare   /dev/sdam1


How many drives are bad?

2008-02-19 Thread Norman Elton
So I had my first "failure" today, when I got a report that one drive
(/dev/sdam) failed. I've attached the output of "mdadm --detail". It
appears that two drives are listed as "removed", but the array is
still functioning. What does this mean? How many drives actually
failed?

This is all a test system, so I can dink around as much as necessary.
Thanks for any advice!

Norman Elton

== OUTPUT OF MDADM =

Version : 00.90.03
  Creation Time : Fri Jan 18 13:17:33 2008
 Raid Level : raid5
 Array Size : 6837319552 (6520.58 GiB 7001.42 GB)
Device Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 8
  Total Devices : 7
Preferred Minor : 4
Persistence : Superblock is persistent

Update Time : Mon Feb 18 11:49:13 2008
  State : clean, degraded
 Active Devices : 6
Working Devices : 6
 Failed Devices : 1
  Spare Devices : 0

 Layout : left-symmetric
 Chunk Size : 64K

   UUID : b16bdcaf:a20192fb:39c74cb8:e5e60b20
 Events : 0.110

    Number   Major   Minor   RaidDevice State
       0      66        1        0      active sync   /dev/sdag1
       1      66       17        1      active sync   /dev/sdah1
       2      66       33        2      active sync   /dev/sdai1
       3      66       49        3      active sync   /dev/sdaj1
       4      66       65        4      active sync   /dev/sdak1
       5       0        0        5      removed
       6       0        0        6      removed
       7      66      113        7      active sync   /dev/sdan1

       8      66       97        -      faulty spare   /dev/sdam1
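
For cross-checking what md itself believes, a few read-only commands (a sketch; it assumes the array is /dev/md4, going by "Preferred Minor : 4", and that the member that went missing would be /dev/sdal1; adjust the names to taste):

   cat /proc/mdstat
   mdadm --examine /dev/sdal1 | grep -i -A 2 state
   mdadm --examine /dev/sdam1 | grep -i -A 2 state
   dmesg | grep -i -e sdal -e sdam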


Re: How many drives are bad?

2008-02-19 Thread Justin Piszcz

Neil,

Is this a bug?

Also, I have a question for Norman: how come your drives are sda[a-z]1?
Typically it is /dev/sda1, /dev/sdb1, etc.


Justin.

On Tue, 19 Feb 2008, Norman Elton wrote:

But why do two show up as "removed"?? I would expect /dev/sdal1 to show up 
someplace, either active or failed.


Any ideas?

Thanks,

Norman



On Feb 19, 2008, at 12:31 PM, Justin Piszcz wrote:


How many drives actually failed?

Failed Devices : 1



On Tue, 19 Feb 2008, Norman Elton wrote:


So I had my first "failure" today, when I got a report that one drive
(/dev/sdam) failed. I've attached the output of "mdadm --detail". It
appears that two drives are listed as "removed", but the array is
still functioning. What does this mean? How many drives actually
failed?

This is all a test system, so I can dink around as much as necessary.
Thanks for any advice!

Norman Elton

== OUTPUT OF MDADM =

 Version : 00.90.03
Creation Time : Fri Jan 18 13:17:33 2008
  Raid Level : raid5
  Array Size : 6837319552 (6520.58 GiB 7001.42 GB)
 Device Size : 976759936 (931.51 GiB 1000.20 GB)
Raid Devices : 8
Total Devices : 7
Preferred Minor : 4
 Persistence : Superblock is persistent

 Update Time : Mon Feb 18 11:49:13 2008
   State : clean, degraded
Active Devices : 6
Working Devices : 6
Failed Devices : 1
Spare Devices : 0

  Layout : left-symmetric
  Chunk Size : 64K

UUID : b16bdcaf:a20192fb:39c74cb8:e5e60b20
  Events : 0.110

    Number   Major   Minor   RaidDevice State
       0      66        1        0      active sync   /dev/sdag1
       1      66       17        1      active sync   /dev/sdah1
       2      66       33        2      active sync   /dev/sdai1
       3      66       49        3      active sync   /dev/sdaj1
       4      66       65        4      active sync   /dev/sdak1
       5       0        0        5      removed
       6       0        0        6      removed
       7      66      113        7      active sync   /dev/sdan1

       8      66       97        -      faulty spare   /dev/sdam1


Re: How many drives are bad?

2008-02-19 Thread Norman Elton
But why do two show up as "removed"?? I would expect /dev/sdal1 to  
show up someplace, either active or failed.


Any ideas?

Thanks,

Norman



On Feb 19, 2008, at 12:31 PM, Justin Piszcz wrote:


How many drives actually failed?

Failed Devices : 1



On Tue, 19 Feb 2008, Norman Elton wrote:


So I had my first "failure" today, when I got a report that one drive
(/dev/sdam) failed. I've attached the output of "mdadm --detail". It
appears that two drives are listed as "removed", but the array is
still functioning. What does this mean? How many drives actually
failed?

This is all a test system, so I can dink around as much as necessary.
Thanks for any advice!

Norman Elton

== OUTPUT OF MDADM =

  Version : 00.90.03
Creation Time : Fri Jan 18 13:17:33 2008
   Raid Level : raid5
   Array Size : 6837319552 (6520.58 GiB 7001.42 GB)
  Device Size : 976759936 (931.51 GiB 1000.20 GB)
 Raid Devices : 8
Total Devices : 7
Preferred Minor : 4
  Persistence : Superblock is persistent

  Update Time : Mon Feb 18 11:49:13 2008
State : clean, degraded
Active Devices : 6
Working Devices : 6
Failed Devices : 1
Spare Devices : 0

   Layout : left-symmetric
   Chunk Size : 64K

 UUID : b16bdcaf:a20192fb:39c74cb8:e5e60b20
   Events : 0.110

    Number   Major   Minor   RaidDevice State
       0      66        1        0      active sync   /dev/sdag1
       1      66       17        1      active sync   /dev/sdah1
       2      66       33        2      active sync   /dev/sdai1
       3      66       49        3      active sync   /dev/sdaj1
       4      66       65        4      active sync   /dev/sdak1
       5       0        0        5      removed
       6       0        0        6      removed
       7      66      113        7      active sync   /dev/sdan1

       8      66       97        -      faulty spare   /dev/sdam1


Re: How many drives are bad?

2008-02-19 Thread Norman Elton
Justin,

This is a Sun X4500 (Thumper) box, so it's got 48 drives inside.
/dev/sd[a-z] are all there as well, just in other RAID sets. Once you
get past /dev/sdz, the naming continues with /dev/sdaa, sdab, etc.

I'd be curious if what I'm experiencing is a bug. What should I try to
restore the array?

Norman
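
One possible sequence for getting back to full redundancy, offered only as a rough sketch (it assumes the array is /dev/md4 and that /dev/sdal and /dev/sdam still respond; compare the event counters from --examine before re-adding anything):

   mdadm --examine /dev/sdal1            # inspect the superblock of the member that disappeared
   mdadm /dev/md4 --add /dev/sdal1       # bring it back so the empty slot can rebuild
   mdadm /dev/md4 --remove /dev/sdam1    # drop the faulty member
   mdadm /dev/md4 --add /dev/sdam1       # only if that drive turns out to be healthy after testing
   cat /proc/mdstat                      # watch the resync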

On 2/19/08, Justin Piszcz <[EMAIL PROTECTED]> wrote:
> Neil,
>
> Is this a bug?
>
> Also, I have a question for Norman-- how come your drives are sda[a-z]1?
> Typically it is /dev/sda1 /dev/sdb1 etc?
>
> Justin.
>
> On Tue, 19 Feb 2008, Norman Elton wrote:
>
> > But why do two show up as "removed"?? I would expect /dev/sdal1 to show up
> > someplace, either active or failed.
> >
> > Any ideas?
> >
> > Thanks,
> >
> > Norman
> >
> >
> >
> > On Feb 19, 2008, at 12:31 PM, Justin Piszcz wrote:
> >
> >> How many drives actually failed?
> >>> Failed Devices : 1
> >>
> >>
> >> On Tue, 19 Feb 2008, Norman Elton wrote:
> >>
> >>> So I had my first "failure" today, when I got a report that one drive
> >>> (/dev/sdam) failed. I've attached the output of "mdadm --detail". It
> >>> appears that two drives are listed as "removed", but the array is
> >>> still functioning. What does this mean? How many drives actually
> >>> failed?
> >>>
> >>> This is all a test system, so I can dink around as much as necessary.
> >>> Thanks for any advice!
> >>>
> >>> Norman Elton
> >>>
> >>> == OUTPUT OF MDADM =
> >>>
> >>>  Version : 00.90.03
> >>> Creation Time : Fri Jan 18 13:17:33 2008
> >>>   Raid Level : raid5
> >>>   Array Size : 6837319552 (6520.58 GiB 7001.42 GB)
> >>>  Device Size : 976759936 (931.51 GiB 1000.20 GB)
> >>> Raid Devices : 8
> >>> Total Devices : 7
> >>> Preferred Minor : 4
> >>>  Persistence : Superblock is persistent
> >>>
> >>>  Update Time : Mon Feb 18 11:49:13 2008
> >>>State : clean, degraded
> >>> Active Devices : 6
> >>> Working Devices : 6
> >>> Failed Devices : 1
> >>> Spare Devices : 0
> >>>
> >>>   Layout : left-symmetric
> >>>   Chunk Size : 64K
> >>>
> >>> UUID : b16bdcaf:a20192fb:39c74cb8:e5e60b20
> >>>   Events : 0.110
> >>>
> >>>  Number   Major   Minor   RaidDevice State
> >>>     0      66        1        0      active sync   /dev/sdag1
> >>>     1      66       17        1      active sync   /dev/sdah1
> >>>     2      66       33        2      active sync   /dev/sdai1
> >>>     3      66       49        3      active sync   /dev/sdaj1
> >>>     4      66       65        4      active sync   /dev/sdak1
> >>>     5       0        0        5      removed
> >>>     6       0        0        6      removed
> >>>     7      66      113        7      active sync   /dev/sdan1
> >>>
> >>>     8      66       97        -      faulty spare   /dev/sdam1


Re: How many drives are bad?

2008-02-19 Thread Justin Piszcz

Norman,

I am extremely interested in what distribution you are running on it and 
what type of SW RAID you are employing (besides the one you showed here). 
Are all 48 drives populated?


Justin.

On Tue, 19 Feb 2008, Norman Elton wrote:


Justin,

This is a Sun X4500 (Thumper) box, so it's got 48 drives inside.
/dev/sd[a-z] are all there as well, just in other RAID sets. Once you
get to /dev/sdz, it starts up at /dev/sdaa, sdab, etc.

I'd be curious if what I'm experiencing is a bug. What should I try to
restore the array?

Norman

On 2/19/08, Justin Piszcz <[EMAIL PROTECTED]> wrote:

Neil,

Is this a bug?

Also, I have a question for Norman-- how come your drives are sda[a-z]1?
Typically it is /dev/sda1 /dev/sdb1 etc?

Justin.

On Tue, 19 Feb 2008, Norman Elton wrote:


But why do two show up as "removed"?? I would expect /dev/sdal1 to show up
someplace, either active or failed.

Any ideas?

Thanks,

Norman



On Feb 19, 2008, at 12:31 PM, Justin Piszcz wrote:


How many drives actually failed?

Failed Devices : 1



On Tue, 19 Feb 2008, Norman Elton wrote:


So I had my first "failure" today, when I got a report that one drive
(/dev/sdam) failed. I've attached the output of "mdadm --detail". It
appears that two drives are listed as "removed", but the array is
still functioning. What does this mean? How many drives actually
failed?

This is all a test system, so I can dink around as much as necessary.
Thanks for any advice!

Norman Elton

== OUTPUT OF MDADM =

 Version : 00.90.03
Creation Time : Fri Jan 18 13:17:33 2008
  Raid Level : raid5
  Array Size : 6837319552 (6520.58 GiB 7001.42 GB)
 Device Size : 976759936 (931.51 GiB 1000.20 GB)
Raid Devices : 8
Total Devices : 7
Preferred Minor : 4
 Persistence : Superblock is persistent

 Update Time : Mon Feb 18 11:49:13 2008
   State : clean, degraded
Active Devices : 6
Working Devices : 6
Failed Devices : 1
Spare Devices : 0

  Layout : left-symmetric
  Chunk Size : 64K

UUID : b16bdcaf:a20192fb:39c74cb8:e5e60b20
  Events : 0.110

    Number   Major   Minor   RaidDevice State
       0      66        1        0      active sync   /dev/sdag1
       1      66       17        1      active sync   /dev/sdah1
       2      66       33        2      active sync   /dev/sdai1
       3      66       49        3      active sync   /dev/sdaj1
       4      66       65        4      active sync   /dev/sdak1
       5       0        0        5      removed
       6       0        0        6      removed
       7      66      113        7      active sync   /dev/sdan1

       8      66       97        -      faulty spare   /dev/sdam1


Re: How many drives are bad?

2008-02-19 Thread Norman Elton
Justin,

There was actually a discussion I fired off a few weeks ago about how
to best run SW RAID on this hardware. Here's the recap:

We're running RHEL, so no access to ZFS/XFS. I really wish we could do
ZFS, but no luck.

The box presents 48 drives, split across 6 SATA controllers. So disks
sda-sdh are on one controller, etc. In our configuration, I run a
RAID5 MD array for each controller, then run LVM on top of these to
form one large VolGroup.

I found that it was easiest to set up ext3 with a maximum of 2TB
partitions. So running on top of the massive LVM VolGroup are a
handful of ext3 partitions, each mounted in the filesystem. This is less
than ideal (ZFS would allow us one large partition), but we're
rewriting some software to utilize the multi-partition scheme.
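
For reference, a minimal sketch of how one controller's worth of that layout might be put together (device names, volume group name and LV size are illustrative, not taken from the actual system):

   mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[a-h]1
   # ... repeat for the remaining five controllers (md1 .. md5) ...
   pvcreate /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5
   vgcreate VolGroup /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5
   lvcreate -L 2048G -n data01 VolGroup     # one of several ~2TB LVs
   mkfs.ext3 /dev/VolGroup/data01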

In this setup, we should be fairly protected against drive failure. We
are vulnerable to a controller failure. If such a failure occurred,
we'd have to restore from backup.

Hope this helps, let me know if you have any questions or suggestions.
I'm certainly no expert here!

Thanks,

Norman

On 2/19/08, Justin Piszcz <[EMAIL PROTECTED]> wrote:
> Norman,
>
> I am extremely interested in what distribution you are running on it and
> what type of SW raid you are employing (besides the one you showed here),
> are all 48 drives filled, or?
>
> Justin.
>
> On Tue, 19 Feb 2008, Norman Elton wrote:
>
> > Justin,
> >
> > This is a Sun X4500 (Thumper) box, so it's got 48 drives inside.
> > /dev/sd[a-z] are all there as well, just in other RAID sets. Once you
> > get to /dev/sdz, it starts up at /dev/sdaa, sdab, etc.
> >
> > I'd be curious if what I'm experiencing is a bug. What should I try to
> > restore the array?
> >
> > Norman
> >
> > On 2/19/08, Justin Piszcz <[EMAIL PROTECTED]> wrote:
> >> Neil,
> >>
> >> Is this a bug?
> >>
> >> Also, I have a question for Norman-- how come your drives are sda[a-z]1?
> >> Typically it is /dev/sda1 /dev/sdb1 etc?
> >>
> >> Justin.
> >>
> >> On Tue, 19 Feb 2008, Norman Elton wrote:
> >>
> >>> But why do two show up as "removed"?? I would expect /dev/sdal1 to show up
> >>> someplace, either active or failed.
> >>>
> >>> Any ideas?
> >>>
> >>> Thanks,
> >>>
> >>> Norman
> >>>
> >>>
> >>>
> >>> On Feb 19, 2008, at 12:31 PM, Justin Piszcz wrote:
> >>>
>  How many drives actually failed?
> > Failed Devices : 1
> 
> 
>  On Tue, 19 Feb 2008, Norman Elton wrote:
> 
> > So I had my first "failure" today, when I got a report that one drive
> > (/dev/sdam) failed. I've attached the output of "mdadm --detail". It
> > appears that two drives are listed as "removed", but the array is
> > still functioning. What does this mean? How many drives actually
> > failed?
> >
> > This is all a test system, so I can dink around as much as necessary.
> > Thanks for any advice!
> >
> > Norman Elton
> >
> > == OUTPUT OF MDADM =
> >
> >  Version : 00.90.03
> > Creation Time : Fri Jan 18 13:17:33 2008
> >   Raid Level : raid5
> >   Array Size : 6837319552 (6520.58 GiB 7001.42 GB)
> >  Device Size : 976759936 (931.51 GiB 1000.20 GB)
> > Raid Devices : 8
> > Total Devices : 7
> > Preferred Minor : 4
> >  Persistence : Superblock is persistent
> >
> >  Update Time : Mon Feb 18 11:49:13 2008
> >State : clean, degraded
> > Active Devices : 6
> > Working Devices : 6
> > Failed Devices : 1
> > Spare Devices : 0
> >
> >   Layout : left-symmetric
> >   Chunk Size : 64K
> >
> > UUID : b16bdcaf:a20192fb:39c74cb8:e5e60b20
> >   Events : 0.110
> >
> >  Number   Major   Minor   RaidDevice State
> >     0      66        1        0      active sync   /dev/sdag1
> >     1      66       17        1      active sync   /dev/sdah1
> >     2      66       33        2      active sync   /dev/sdai1
> >     3      66       49        3      active sync   /dev/sdaj1
> >     4      66       65        4      active sync   /dev/sdak1
> >     5       0        0        5      removed
> >     6       0        0        6      removed
> >     7      66      113        7      active sync   /dev/sdan1
> >
> >     8      66       97        -      faulty spare   /dev/sdam1


RE: How many drives are bad?

2008-02-19 Thread Steve Fairbairn

> 
> The box presents 48 drives, split across 6 SATA controllers. 
> So disks sda-sdh are on one controller, etc. In our 
> configuration, I run a RAID5 MD array for each controller, 
> then run LVM on top of these to form one large VolGroup.
> 

I might be missing something here, and I realise you'd lose 8 drives to
redundancy rather than 6, but wouldn't it have been better to have 8
arrays of 6 drives, each array using a single drive from each
controller?  That way a single controller failure (assuming no other HD
failures) wouldn't actually take any array down?  I do realise that 2
controller failures at the same time would lose everything.
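
A sketch of what one of those arrays might look like with Norman's device naming (assuming, as he described, that sda-sdh sit on the first controller, sdi-sdp on the second, and so on; the exact names are illustrative):

   mdadm --create /dev/md0 --level=5 --raid-devices=6 \
         /dev/sda1 /dev/sdi1 /dev/sdq1 /dev/sdy1 /dev/sdag1 /dev/sdao1

Repeating that eight times, shifting each member by one (sdb1/sdj1/sdr1/..., then sdc1/sdk1/..., etc.), gives eight 6-drive arrays in which no two members share a controller.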

Steve.



LVM performance (was: Re: RAID5 to RAID6 reshape?)

2008-02-19 Thread Oliver Martin

Janek Kozicki schrieb:

hold on. This might be related to raid chunk positioning with respect
to LVM chunk positioning. If they interfere there indeed may be some
performance drop. Best to make sure that those chunks are aligned together.


Interesting. I'm seeing a 20% performance drop too, with default RAID 
and LVM chunk sizes of 64K and 4M, respectively. Since 64K divides 4M 
evenly, I'd think there shouldn't be such a big performance penalty.
It's not like I care that much, I only have 100 Mbps ethernet anyway. 
I'm just wondering...


$ hdparm -t /dev/md0

/dev/md0:
 Timing buffered disk reads:  148 MB in  3.01 seconds =  49.13 MB/sec

$ hdparm -t /dev/dm-0

/dev/dm-0:
 Timing buffered disk reads:  116 MB in  3.04 seconds =  38.20 MB/sec

dm doesn't do anything fancy to justify the drop (encryption etc). In 
fact, it doesn't do much at all yet - I intend to use it to join 
multiple arrays in the future when I have drives of different sizes, but 
right now, I only have 500GB drives. So it's just one PV in one VG in 
one LV.


Here's some more info:

$ mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
  Creation Time : Sat Nov 24 12:15:48 2007
 Raid Level : raid5
 Array Size : 976767872 (931.52 GiB 1000.21 GB)
  Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Feb 19 01:18:26 2008
  State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

 Layout : left-symmetric
 Chunk Size : 64K

           UUID : d41fe8a6:84b0f97a:8ac8b21a:819833c6 (local to host quassel)

 Events : 0.330016

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       81        2      active sync   /dev/sdf1

$ pvdisplay
  --- Physical volume ---
  PV Name   /dev/md0
  VG Name   raid
  PV Size   931,52 GB / not usable 2,69 MB
  Allocatable   yes (but full)
  PE Size (KByte)   4096
  Total PE  238468
  Free PE   0
  Allocated PE  238468
  PV UUID   KadH5k-9Cie-dn5Y-eNow-g4It-lfuI-XqNIet

$ vgdisplay
  --- Volume group ---
  VG Name   raid
  System ID
  Formatlvm2
  Metadata Areas1
  Metadata Sequence No  4
  VG Access read/write
  VG Status resizable
  MAX LV0
  Cur LV1
  Open LV   1
  Max PV0
  Cur PV1
  Act PV1
  VG Size   931,52 GB
  PE Size   4,00 MB
  Total PE  238468
  Alloc PE / Size   238468 / 931,52 GB
  Free  PE / Size   0 / 0
  VG UUID   AW9yaV-B3EM-pRLN-RTIK-LEOd-bfae-3Vx3BC

$ lvdisplay
  --- Logical volume ---
  LV Name/dev/raid/raid
  VG Nameraid
  LV UUIDeWIRs8-SFyv-lnix-Gk72-Lu9E-Ku7j-iMIv92
  LV Write Accessread/write
  LV Status  available
  # open 1
  LV Size931,52 GB
  Current LE 238468
  Segments   1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 256
  Block device   253:0

--
Oliver


Re: LVM performance (was: Re: RAID5 to RAID6 reshape?)

2008-02-19 Thread Jon Nelson
On Feb 19, 2008 1:41 PM, Oliver Martin
<[EMAIL PROTECTED]> wrote:
> Janek Kozicki schrieb:
> > hold on. This might be related to raid chunk positioning with respect
> > to LVM chunk positioning. If they interfere there indeed may be some
> > performance drop. Best to make sure that those chunks are aligned together.
>
> Interesting. I'm seeing a 20% performance drop too, with default RAID
> and LVM chunk sizes of 64K and 4M, respectively. Since 64K divides 4M
> evenly, I'd think there shouldn't be such a big performance penalty.
> It's not like I care that much, I only have 100 Mbps ethernet anyway.
> I'm just wondering...
>
> $ hdparm -t /dev/md0
>
> /dev/md0:
>   Timing buffered disk reads:  148 MB in  3.01 seconds =  49.13 MB/sec
>
> $ hdparm -t /dev/dm-0
>
> /dev/dm-0:
>   Timing buffered disk reads:  116 MB in  3.04 seconds =  38.20 MB/sec

I'm getting better performance on an LV than on the underlying MD:

# hdparm -t /dev/md0

/dev/md0:
 Timing buffered disk reads:  408 MB in  3.01 seconds = 135.63 MB/sec
# hdparm -t /dev/raid/multimedia

/dev/raid/multimedia:
 Timing buffered disk reads:  434 MB in  3.01 seconds = 144.04 MB/sec
#

md0 is a 3-disk raid5, 64k chunk, alg. 2, with a bitmap, built from
7200rpm sata drives from several manufacturers.



-- 
Jon


Re: LVM performance (was: Re: RAID5 to RAID6 reshape?)

2008-02-19 Thread Iustin Pop
On Tue, Feb 19, 2008 at 01:52:21PM -0600, Jon Nelson wrote:
> On Feb 19, 2008 1:41 PM, Oliver Martin
> <[EMAIL PROTECTED]> wrote:
> > Janek Kozicki schrieb:
> >
> > $ hdparm -t /dev/md0
> >
> > /dev/md0:
> >   Timing buffered disk reads:  148 MB in  3.01 seconds =  49.13 MB/sec
> >
> > $ hdparm -t /dev/dm-0
> >
> > /dev/dm-0:
> >   Timing buffered disk reads:  116 MB in  3.04 seconds =  38.20 MB/sec
> 
> I'm getting better performance on a LV than on the underlying MD:
> 
> # hdparm -t /dev/md0
> 
> /dev/md0:
>  Timing buffered disk reads:  408 MB in  3.01 seconds = 135.63 MB/sec
> # hdparm -t /dev/raid/multimedia
> 
> /dev/raid/multimedia:
>  Timing buffered disk reads:  434 MB in  3.01 seconds = 144.04 MB/sec
> #

As people are trying to point out in many lists and docs: hdparm is
*not* a benchmark tool. So its numbers, while interesting, should not be
regarded as a valid comparison.
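
A rough alternative that at least bypasses the page cache, for a quick sequential read figure (a sketch using GNU dd with O_DIRECT; make the size comfortably larger than any caches involved):

   dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct
   dd if=/dev/dm-0 of=/dev/null bs=1M count=4096 iflag=direct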

Just my opinion.

regards,
iustin


Re: LVM performance

2008-02-19 Thread Peter Rabbitson

Oliver Martin wrote:
Interesting. I'm seeing a 20% performance drop too, with default RAID 
and LVM chunk sizes of 64K and 4M, respectively. Since 64K divides 4M 
evenly, I'd think there shouldn't be such a big performance penalty.


I am no expert, but as far as I have read you must not only have compatible 
chunk sizes (which is easy and most often the case); you must also stripe-align 
the LVM chunks, so every chunk spans an even number of raid stripes (not 
raid chunks). Check the output of `dmsetup table`. The last number is the 
offset of the underlying block device at which the LVM data portion starts. It 
must be divisible by the raid stripe length (the length varies for different 
raid types).


Currently LVM does not offer an easy way to do such alignment; you have to do 
it manually when executing pvcreate. By using the option --metadatasize one 
can specify the size of the area between the LVM header (64KiB) and the start 
of the data area. So one would supply STRIPE_SIZE - 64 for metadatasize[*], 
and the result will be a stripe-aligned LVM.
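
Applied to the 3-disk, 64K-chunk raid5 shown earlier (data stripe = 2 x 64KiB = 128KiB), that would look roughly like this; a sketch based on the description above, for a freshly created PV only, not a verified recipe:

   dmsetup table                          # last field = PV data offset, in 512-byte sectors
   pvcreate --metadatasize 64k /dev/md0   # 128KiB stripe minus the 64KiB header (rounded up to 64KiB anyway)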


This information is unverified; I just compiled it from different list threads 
and whatnot. I did this to my own arrays/volumes and I get near 100% raw 
speed. If someone else can confirm the validity of this, it would be great.


Peter

* The supplied number is always rounded up to be divisible by 64KiB, so the 
smallest total LVM header is at least 128KiB



RE: How many drives are bad?

2008-02-19 Thread Guy Watkins
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Steve Fairbairn
} Sent: Tuesday, February 19, 2008 2:45 PM
} To: 'Norman Elton'
} Cc: linux-raid@vger.kernel.org
} Subject: RE: How many drives are bad?
}
}
} >
} > The box presents 48 drives, split across 6 SATA controllers.
} > So disks sda-sdh are on one controller, etc. In our
} > configuration, I run a RAID5 MD array for each controller,
} > then run LVM on top of these to form one large VolGroup.
} >
}
} I might be missing something here, and I realise you'd lose 8 drives to
} redundancy rather than 6, but wouldn't it have been better to have 8
} arrays of 6 drives, each array using a single drive from each
} controller?  That way a single controller failure (assuming no other HD
} failures) wouldn't actually take any array down?  I do realise that 2
} controller failures at the same time would lose everything.

Wow.  Sounds like what I said a few months ago.  I think I also recommended
RAID6.

Guy



Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-02-19 Thread Peter Grandi
>> What sort of tools are you using to get these benchmarks, and can I
>> use them for ext3?

The only simple tools I have found that give semi-reasonable
numbers, avoiding most of the many pitfalls of storage speed
testing (almost all storage benchmarks I see are largely
meaningless), are recent versions of GNU 'dd' when used with the
'fdatasync' and 'direct' flags, and Bonnie 1.4 with the options
'-u -y -o_direct', both used with a moderately large volume of
data (dependent on the size of the host adapter cache, if any).
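
As a concrete illustration, a single large sequential write with those flags could look like this (a sketch; the path and size simply follow the test script quoted further down):

   dd if=/dev/zero of=/r1/bigfile bs=1M count=10240 oflag=direct conv=fdatasync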

In particular one must be very careful when using older versions
of 'dd' or Bonnie, or using bonnie++, iozone (unless with -U or
-I), ...

[ ... ]

 > for i in 4 8 16 32 64 128 256 512 1024 2048 4096 8192 16384 32768 65536
 > do
 > cd /
 > umount /r1
 > mdadm -S /dev/md3
 > mdadm --create --assume-clean --verbose /dev/md3 --level=5 \
 >       --raid-devices=10 --chunk=$i --run /dev/sd[c-l]1
 > /etc/init.d/oraid.sh # to optimize my raid stuff
 > mkfs.xfs -f /dev/md3
 > mount /dev/md3 /r1 -o logbufs=8,logbsize=262144
 > /usr/bin/time -f %E -o ~/$i=chunk.txt \
 >       bash -c 'dd if=/dev/zero of=/r1/bigfile bs=1M count=10240; sync'
 > done

I would not consider the results from this as particularly
meaningful (that 'sync' only helps a little bit), even for
large sequential write testing. One would also have to document
the elevator used and the flusher daemon parameters.

Let's say that storage benchmarking is a lot more difficult and
subtle than it looks to the untrained eye.

It is just so much easier to use Bonnie 1.4 (with the flags
mentioned above) as a first (and often last) approximation (but
always remember to mention which elevator was in use).


Re: RAID5 to RAID6 reshape?

2008-02-19 Thread Alexander Kühn

- Message from [EMAIL PROTECTED] -
Date: Mon, 18 Feb 2008 19:05:02 +
From: Peter Grandi <[EMAIL PROTECTED]>
Reply-To: Peter Grandi <[EMAIL PROTECTED]>
 Subject: Re: RAID5 to RAID6 reshape?
  To: Linux RAID 



On Sun, 17 Feb 2008 07:45:26 -0700, "Conway S. Smith"
<[EMAIL PROTECTED]> said:



Consider for example the answers to these questions:

* Suppose you have a 2+1 array which is full. Now you add a disk
  and that means that almost all free space is on a single disk.
  The MD subsystem has two options as to where to add that lump
  of space, consider why neither is very pleasant.


No, only one: at the end of the md device, and the "free space" will be  
evenly distributed among the drives.



* How fast is doing unaligned writes with a 13+1 or a 12+2
  stripe? How often is that going to happen, especially on an
  array that started as a 2+1?


They are all the same speed with raid5 no matter what you started  
with. You read two blocks and you write two blocks. (not even chunks  
mind you)
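
To spell out the arithmetic behind that (the usual read-modify-write accounting, sketched here rather than quoted from any implementation):

   raid5, updating one block:  read old data + old parity   = 2 reads
                               write new data + new parity  = 2 writes

The same four I/Os are needed whether the stripe is 2+1 or 13+1, which is why the per-block write penalty does not grow with the number of disks.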



* How long does it take to rebuild parity with a 13+1 array or a
  12+2 array in case of s single disk failure? What happens if a
  disk fails during rebuild?


Depends on how much data the controllers can push. But at least with  
my hpt2320 the limiting factor is the disk speed and that doesn't  
change whether I have 2 disks or 12.



* When you have 13 drives and you add the 14th, how long does
  that take? What happens if a disk fails during rebuild??


...again pretty much the same as adding a fourth drive to a three-drive raid5.
It will continue to be degraded... nothing special.


beolach> Well, I was reading that LVM2 had a 20%-50% performance
beolach> penalty, which in my mind is a really big penalty. But I
beolach> think those numbers were from some time ago, has the
beolach> situation improved?

LVM2 relies on DM, which is not much slower than say 'loop', so
it is almost insignificant for most people.


I agree.


But even if the overhead may be very very low, DM/LVM2/EVMS seem
to me to have very limited usefulness (e.g. Oracle tablespaces,
and there are contrary opinions as to that too). In your stated
applications it is hard to see why you'd want to split your
arrays into very many block devices or why you'd want to resize
them.


I think the idea is to be able to have more than just one device to  
put a filesystem on. For example a / filesystem, swap and maybe  
something like /storage comes to mind. Yes, one could do that with  
partitioning, but lvm was made for this, so why not use it.


The situation looks different with raid6; there the write penalty  
becomes higher with more disks, but not with raid5.

Regards,
Alex.

- End message from [EMAIL PROTECTED] -




--
Alexander Kuehn

Cell phone: +49 (0)177 6461165
Cell fax:   +49 (0)177 6468001
Tel @Home:  +49 (0)711 6336140
Mail mailto:[EMAIL PROTECTED]



cakebox.homeunix.net - all the machine one needs..


