Re: raid1 with nbd member hangs MD on SLES10 and RHEL5

2007-06-12 Thread Mike Snitzer
On 6/12/07, Neil Brown <[EMAIL PROTECTED]> wrote: On Tuesday June 12, [EMAIL PROTECTED] wrote: > > I can provide more detailed information; please just ask. > A complete sysrq trace (all processes) might help. I'll send it to you off-list. Thanks, Mike

Re: raid1 with nbd member hangs MD on SLES10 and RHEL5

2007-06-12 Thread Neil Brown
On Tuesday June 12, [EMAIL PROTECTED] wrote: > > I can provide more detailed information; please just ask. > A complete sysrq trace (all processes) might help. NeilBrown
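
For anyone wanting to capture the trace Neil asks for here, the magic SysRq facility is the usual route (assuming the kernel was built with CONFIG_MAGIC_SYSRQ); a minimal sketch:

    # enable all SysRq functions for the current boot
    echo 1 > /proc/sys/kernel/sysrq
    # dump the state of every task into the kernel log
    echo t > /proc/sysrq-trigger
    # save the resulting trace
    dmesg > sysrq-trace.txt

The same dump can also be triggered from a console keyboard with Alt-SysRq-t.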

raid1 with nbd member hangs MD on SLES10 and RHEL5

2007-06-12 Thread Mike Snitzer
When using raid1 with one local member and one nbd member (marked as write-mostly), MD hangs when trying to format /dev/md0 with ext3. Both 'cat /proc/mdstat' and 'mdadm --detail /dev/md0' hang indefinitely. I've not tried to reproduce on 2.6.18 or 2.6.19ish kernel.org kernels yet, but this issue aff
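
For context, a configuration along these lines would typically be assembled roughly as follows; host names, ports and device names are illustrative, not taken from the report:

    # on the remote host: export a block device over nbd (port is illustrative)
    nbd-server 2000 /dev/sdb1
    # on the local host: attach the export as /dev/nbd0
    nbd-client remote-host 2000 /dev/nbd0
    # raid1 with the local disk plus the nbd member marked write-mostly
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/sda1 --write-mostly /dev/nbd0
    # the step reported to hang
    mkfs.ext3 /dev/md0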

Re: raid5: coding style cleanup / refactor

2007-06-12 Thread Dan Williams
> I assume that you're prepared to repair all that damage to your tree, but > it seems a bit masochistic? It's either this or have an inconsistent coding style throughout raid5.c. I figure it is worth it to have reduced code duplication between raid5 and raid6, and it makes it easier to add new

RE: raid5: coding style cleanup / refactor

2007-06-12 Thread Williams, Dan J
> From: Andrew Morton [mailto:[EMAIL PROTECTED] > Unfortunately these cleanups get into a huge fight with your very own > git-md-accel.patch: > Yes, you missed the note that said: Note, I have not rebased git-md-accel yet. While that is happening I wanted to have this patch out f

Re: raid5: coding style cleanup / refactor

2007-06-12 Thread Andrew Morton
On Tue, 12 Jun 2007 10:41:03 -0700 Dan Williams <[EMAIL PROTECTED]> wrote: > Most of the raid5 code predates git so the coding style violations have > been present for a long time. However, now that major new patches are > arriving, checkpatch.pl complains about these old violations. Instead of
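
For reference, the complaints in question are what checkpatch.pl emits when pointed at an existing source file rather than a patch; with a checkpatch version that supports whole-file checking, something like:

    # run from the top of a kernel tree
    ./scripts/checkpatch.pl --file drivers/md/raid5.c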

Mdadm recovery in infinite loop

2007-06-12 Thread Kyle Harris
I have a problem with a software RAID-1 implementation on the Linux 2.6.9 kernel using mdadm. The mdadm version is 1.12.0, and according to yum on CentOS 4.4 that is the current version for this distro. The array in question showed up as degraded (I think that was the term it used). I have had t
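
For readers hitting something similar, the usual first steps are to inspect the array state and, if a member has been kicked out, add it back so the resync can run; device names here are illustrative:

    cat /proc/mdstat            # a degraded two-disk raid1 shows up as [U_]
    mdadm --detail /dev/md0     # per-member state and any rebuild progress
    # put the failed member back into the array (illustrative device)
    mdadm /dev/md0 --add /dev/sdb1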

Re: below 10MB/s write on raid5

2007-06-12 Thread Justin Piszcz
On Tue, 12 Jun 2007, Bill Davidsen wrote: Dexter Filmore wrote: I recently upgraded my file server, yet I'm still unsatisfied with the write speed. The machine is now an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM. The four RAID disks are attached to the board's onboard SATA controller

Re: below 10MB/s write on raid5

2007-06-12 Thread Bill Davidsen
Dexter Filmore wrote: I recently upgraded my file server, yet I'm still unsatisfied with the write speed. The machine is now an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM. The four RAID disks are attached to the board's onboard SATA controller (Sil3114, attached via PCI). Kernel is 2.6.21.1
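
As a starting point for this kind of investigation, two things that often matter for raid5 write throughput are the stripe cache size and how the test itself is run; the values and paths below are illustrative:

    # enlarge the raid5 stripe cache (default is small; value is illustrative)
    echo 8192 > /sys/block/md0/md/stripe_cache_size
    # sequential write test that forces data to disk before reporting a rate
    dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=2048 conv=fdatasync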

Re: [PATCH 000 of 2] md: Introduction - bugfixes for md/raid{1,10}

2007-06-12 Thread Bill Davidsen
NeilBrown wrote: Following are a couple of bugfixes for raid10 and raid1. They only affect fairly uncommon configurations (more than 2 mirrors) and can cause data corruption. They are suitable for 2.6.22 and 21-stable. Thanks, NeilBrown [PATCH 001 of 2] md: Fix two raid10 bugs. [PATCH 002
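
For context, the "more than 2 mirrors" configurations these fixes target look roughly like the following; device names are illustrative:

    # raid1 with three mirrors of the same data
    mdadm --create /dev/md0 --level=1 --raid-devices=3 \
          /dev/sda1 /dev/sdb1 /dev/sdc1
    # raid10 keeping three copies of every block (near layout)
    mdadm --create /dev/md1 --level=10 --layout=n3 --raid-devices=3 \
          /dev/sdd1 /dev/sde1 /dev/sdf1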

Re: SLES 9 SP3 and mdadm 2.6.1 (via rpm)

2007-06-12 Thread Bill Davidsen
Neil Brown wrote: On Tuesday June 12, [EMAIL PROTECTED] wrote: Hello everyone. I've got a SLES9 SP3 system running and I've been quite happy with it so far. Recently, I've created a RAID-5 spanning 4 disks on our company server. It runs quite nicely and we're happy with that too. I created that RAID usi

Re: Some RAID levels do not support bitmap

2007-06-12 Thread Bill Davidsen
Neil Brown wrote: On Monday June 11, [EMAIL PROTECTED] wrote: Jan Engelhardt wrote: Hi, RAID levels 0 and 4 do not seem to like the -b internal. Is this intentional? Runs 2.6.20.2 on i586. (BTW, do you already have a PAGE_SIZE=8K fix?) 14:47 ichi:/dev # mdadm -C /dev/md0 -l 4 -e 1.
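
For comparison, internal bitmaps are accepted on the redundant levels (typically raid1/5/6/10); a sketch with illustrative device names:

    # internal bitmap requested at creation time
    mdadm --create /dev/md0 --level=5 --raid-devices=3 --bitmap=internal \
          /dev/sda1 /dev/sdb1 /dev/sdc1
    # or added to an existing array afterwards
    mdadm --grow /dev/md0 --bitmap=internal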

Re: spare group

2007-06-12 Thread Neil Brown
On Tuesday June 12, [EMAIL PROTECTED] wrote: > > I am very sorry, but it doesn't work with 0.9 superblocks either :( We are missing something small, but important, here. Before you start to code: mdadm was running in monitor mode and reported a Fail. mdadm is the latest version, 2.6.2. > > tg Hm
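
For anyone reproducing this, spare-group sharing is configured in mdadm.conf and acted on by monitor mode; a minimal sketch with placeholder UUIDs and an illustrative group name:

    # /etc/mdadm.conf: two arrays sharing one pool of spares
    ARRAY /dev/md0 UUID=... spare-group=pool1
    ARRAY /dev/md1 UUID=... spare-group=pool1
    # monitor mode moves a spare between arrays in the same spare-group on failure
    mdadm --monitor --scan --daemonise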

Re: spare group

2007-06-12 Thread Tomka Gergely
Neil Brown wrote: (reads code). Ahhh. You are using version-1 superblocks, aren't you? That code only works for version-0.90 superblocks. That was careless of me. It shouldn't be hard to make it work more generally, but it looks like it will be slightly more than trivial. I'll try to get you a

Re: SLES 9 SP3 and mdadm 2.6.1 (via rpm)

2007-06-12 Thread Neil Brown
On Tuesday June 12, [EMAIL PROTECTED] wrote: > Hello everyone. > > I've got a SLES9 SP3 system running and I've been quite happy with it so far. > > Recently, I've created a RAID-5 spanning 4 disks on our company > server. It runs quite nicely and we're happy with that too. I created > that RAID using the SLE

SLES 9 SP3 and mdadm 2.6.1 (via rpm)

2007-06-12 Thread Thorsten Wolf
Hello everyone. I've got a SLES9 SP3 system running and I've been quite happy with it so far. Recently, I've created a RAID-5 spanning 4 disks on our company server. It runs quite nicely and we're happy with that too. I created that RAID using the SLES mdadm (1.4 I believe) package. After discovering that
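
For reference, checking the installed mdadm version and building a 4-disk RAID-5 of this sort would look roughly like the following; device names are illustrative:

    mdadm --version
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1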