Re: Few questions

2007-12-08 Thread David Greaves
Guy Watkins wrote:
> man md
> man mdadm
and
http://linux-raid.osdl.org/index.php/Main_Page

:)



/proc/mdstat docs (was Re: Few questions)

2007-12-08 Thread David Greaves
Michael Makuch wrote:
> So my questions are:
...
> - Is this a.o.k for a raid5 array?

So I realised that /proc/mdstat isn't documented too well anywhere...

http://linux-raid.osdl.org/index.php/Mdstat

Comments welcome...

David


Re: /proc/mdstat docs (was Re: Few questions)

2007-12-08 Thread Raz
Many thanks, David. It is very useful.

On 12/8/07, David Greaves <[EMAIL PROTECTED]> wrote:
> Michael Makuch wrote:
> > So my questions are:
> ...
> > - Is this a.o.k for a raid5 array?
>
> So I realised that /proc/mdstat isn't documented too well anywhere...
>
> http://linux-raid.osdl.org/index.php/Mdstat
>
> Comments welcome...
>
> David


-- 
Raz


Re: mounting raid5 with different unit values

2007-12-08 Thread Raz
Well... this actually works just fine with a newer kernel (2.6.18-8-el5, CentOS 5). I managed to run mkfs.xfs and mount over raid5 with a pseudo raid5 unit size, and with the appropriate raid5 patches and a matching user-space access pattern I eliminated the read penalty in 99% of cases.
I sincerely hope I won't get any crashes with these filesystem tunings.
So... first, Chris and all you XFS guys, many many thanks.
Chris, how "dangerous" are these tunings? Should I expect "weird" behaviour from the file system?
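
For anyone wanting to reproduce the alignment part: XFS takes its stripe geometry both at mkfs time and at mount time. A minimal sketch with hypothetical numbers, assuming a 4-disk RAID5 (64k chunk, hence 3 data disks) on /dev/md0:

# mkfs.xfs -d su=64k,sw=3 /dev/md0
# mount -o sunit=128,swidth=384 /dev/md0 /mnt

su is the chunk size and sw the number of data disks; the mount options express the same geometry in 512-byte sectors (64k = 128 sectors, 3 x 128 = 384). This covers only the stock alignment knobs, not Raz's pseudo unit size or the raid5 patches he mentions.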

On 10/8/07, Chris Wedgwood <[EMAIL PROTECTED]> wrote:
> On Sun, Oct 07, 2007 at 11:48:14AM -0400, Justin Piszcz wrote:
>
> > man mount :)
>
> Ah of course.
>
> But those will be more restrictive than what you can specify when you
> make the file-system (because mkfs.xfs can align the AGs to suit).
>


-- 
Raz


Re: Reading takes 100% precedence over writes for mdadm+raid5?

2007-12-08 Thread Jon Nelson
This is what dstat shows while I copy lots of large files around (ext3),
one file at a time. I've benchmarked the raid itself at 65-70 MB/s
maximum actual write I/O, so this 3-4 MB/s is pretty bad.

I should note that ALL other I/O suffers horribly, even on other filesystems.
What might the cause be?

I should also note: with a larger stripe_cache_size (384 and 512)
performance stays the same; with a smaller one (128) performance
*increases* and holds steadier at 10-13 MB/s.
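
For anyone wanting to repeat the experiment, the stripe cache is tuned through sysfs. A sketch, assuming the array is /dev/md0 (substitute your own array; the value is the number of stripe-cache entries, one page per member device each):

# cat /sys/block/md0/md/stripe_cache_size
# echo 128 > /sys/block/md0/md/stripe_cache_size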


total-cpu-usage --dsk/sda-- --dsk/sdb-- --dsk/sdc-- --dsk/sdd-- -dsk/total->
usr sys idl wai hiq siq| read  writ: read  writ: read  writ: read  writ: read  writ>
  1   1  95   3   0   0|  12k 4261B: 106k  125k:  83k  110k:  83k  110k: 283k  348k>
  0   5   0  91   1   2|   0     0 :2384k 4744k:2612k 4412k:2336k 4804k:7332k   14M>
  0   4   0  91   1   3|   0     0 :2352k 4964k:2392k 4812k:2620k 4764k:7364k   14M>
  0   4   0  92   1   3|   0     0 :1068k 3524k:1336k 3184k:1360k 2912k:3764k 9620k>
  0   4   0  92   1   2|   0     0 :2304k 2612k:2128k 2484k:2332k 3028k:6764k 8124k>
  0   4   0  92   1   2|   0     0 :1584k 3428k:1252k 3992k:1592k 3416k:4428k   11M>
  0   3   0  93   0   2|   0     0 :1400k 2364k:1424k 2700k:1584k 2592k:4408k 7656k>
  0   4   0  93   1   2|   0     0 :1764k 3084k:1820k 2972k:1796k 2396k:5380k 8452k>
  0   4   0  92   2   3|   0     0 :1984k 3736k:1772k 4024k:1792k 4524k:5548k   12M>
  0   4   0  93   1   2|   0     0 :1852k 3860k:1840k 3408k:1696k 3648k:5388k   11M>
  0   4   0  93   0   2|   0     0 :1328k 2500k:1640k 2348k:1672k 2128k:4640k 6976k>
  0   4   0  92   0   4|   0     0 :1624k 3944k:2080k 3432k:1760k 3704k:5464k   11M>
  0   1   0  97   1   2|   0     0 :1480k 1340k: 976k 1564k:1268k 1488k:3724k 4392k>
  0   4   0  92   1   2|   0     0 :1320k 2676k:1608k 2548k: 968k 2572k:3896k 7796k>
  0   2   0  96   1   1|   0     0 :1856k 1808k:1752k 1988k:1752k 1600k:5360k 5396k>
  0   4   0  92   2   1|   0     0 :1360k 2560k:1240k 2788k:1580k 2940k:4180k 8288k>
  0   2   0  97   1   2|   0     0 :1928k 1456k:1628k 2080k:1488k 2308k:5044k 5844k>
  1   3   0  94   2   2|   0     0 :1432k 2156k:1320k 1840k: 936k 1072k:3688k 5068k>
  0   3   0  93   2   2|   0     0 :1760k 2164k:1440k 2384k:1276k 2972k:4476k 7520k>
  0   3   0  95   1   2|   0     0 :1088k 1064k: 896k 1424k:1152k  992k:3136k 3480k>
  0   0   0  96   0   2|   0     0 : 976k  888k: 632k 1120k:1016k  968k:2624k 2976k>
  0   2   0  94   1   2|   0     0 :1120k 1864k: 964k 1776k:1060k 1856k:3144k 5496k>
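
(Output in that shape comes from something like "dstat -cd -D sda,sdb,sdc,sdd,total" — the exact invocation above is a guess; -c selects the cpu stats, -d the disk stats, and -D picks which disks are shown.)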

-- 
Jon


Re: /proc/mdstat docs (was Re: Few questions)

2007-12-08 Thread Michael Makuch

David Greaves wrote:
> Michael Makuch wrote:
> > So my questions are:
> ...
> > - Is this a.o.k for a raid5 array?
>
> So I realised that /proc/mdstat isn't documented too well anywhere...
>
> http://linux-raid.osdl.org/index.php/Mdstat
>
> Comments welcome...
>
> David


One thing: in the section "md device line" you describe how to identify
spare devices, but you don't mention the "(S)" which appears after a
spare device, at least on mine:


# uname -a
Linux pecan.makuch.org 2.6.23.1-21.fc7 #1 SMP Thu Nov 1 21:09:24 EDT 2007 i686 i686 i386 GNU/Linux


# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 etherd/e0.0[0] etherd/e0.2[9](S) etherd/e0.9[8] etherd/e0.8[7] etherd/e0.7[6] etherd/e0.6[5] etherd/e0.5[4] etherd/e0.4[3] etherd/e0.3[2] etherd/e0.1[1]
      3907091968 blocks level 5, 64k chunk, algorithm 2 [9/9] [UUUUUUUUU]
      [==================>..]  resync = 91.0% (444527040/488386496) finish=560.9min speed=1301K/sec

unused devices: <none>


Unless that means something else??? But e0.2 is my spare, so I'm just
assuming "(S)" means spare!
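
For the record, "(S)" does mark a spare (and "(F)" a faulty device). An easy cross-check against the array above:

# mdadm --detail /dev/md0

The device table at the end of that output lists each member's state, and etherd/e0.2 should show up there as "spare".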


Thanks
Mike




Re: Few questions

2007-12-08 Thread Michael Makuch

Guy Watkins wrote:
> man md
> man mdadm
>
> I use RAID6.  Happy with it so far, but haven't had a disk failure yet.
> RAID5 sucks because if you have 1 failed disk and 1 bad block on any other
> disk, you are hosed.
>
> Hope that helps.
I can't believe I've been using a raid array for 2 years and didn't know
'man md' was there. I've lived and breathed off of Software-RAID-HOWTO.html
but it never mentions it.

Just a suggestion to whom it may concern: it might be nice to mention
'man md' in the Howto as well as the wiki pages.

Thanks

Mike
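
On Guy's point about RAID5 being hosed by one failed disk plus one latent bad block: periodic scrubbing finds bad blocks while the array is still redundant, so they get rewritten from parity before they can coincide with a disk failure. A sketch, assuming the array is /dev/md0:

# echo check > /sys/block/md0/md/sync_action
# cat /sys/block/md0/md/mismatch_cnt

The first command read-verifies every sector in the background; the second shows how many inconsistencies the pass found.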