On 6/12/07, Neil Brown <[EMAIL PROTECTED]> wrote:
On Tuesday June 12, [EMAIL PROTECTED] wrote:
>
> I can provide more detailed information; please just ask.
>
A complete sysrq trace (all processes) might help.
I'll send it to you off list.
thanks,
Mike
On Tuesday June 12, [EMAIL PROTECTED] wrote:
>
> I can provide more detailed information; please just ask.
>
A complete sysrq trace (all processes) might help.
NeilBrown
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
When using raid1 with one local member and one nbd member (marked as
write-mostly), MD hangs when trying to format /dev/md0 with ext3. Both
'cat /proc/mdstat' and 'mdadm --detail /dev/md0' hang indefinitely.
I've not tried to reproduce on 2.6.18 or 2.6.19ish kernel.org kernels
yet but this issue aff
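The reported setup can be sketched as follows. This is a hedged reconstruction, not the reporter's exact commands: the device names (/dev/sda1, /dev/nbd0) are placeholders I am assuming for illustration.

```shell
# Two-member RAID-1: one local disk plus one NBD device flagged
# write-mostly, so reads are steered to the local member.
# --write-mostly applies to the devices listed after it.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/sda1 --write-mostly /dev/nbd0

# The hang was observed at this step:
mkfs.ext3 /dev/md0
```

Requires root and real block devices; shown only to make the reported configuration concrete.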
> I assume that you're prepared to repair all that damage to your tree, but
> it seems a bit masochistic?
It's either this or an inconsistent coding style throughout
raid5.c. I figure it is worth it to have reduced code duplication
between raid5 and raid6, and it makes it easier to add new
> From: Andrew Morton [mailto:[EMAIL PROTECTED]
> Unfortunately these cleanups get into a huge fight with your very own
> git-md-accel.patch:
>
Yes, you missed the note that said:
Note, I have not rebased git-md-accel yet. While that is happening I
wanted to have this patch out f
On Tue, 12 Jun 2007 10:41:03 -0700
Dan Williams <[EMAIL PROTECTED]> wrote:
> Most of the raid5 code predates git so the coding style violations have
> been present for a long time. However, now that major new patches are
> arriving, checkpatch.pl complains about these old violations. Instead of
I have a problem with a software RAID-1 implementation on the Linux 2.6.9
kernel using mdadm. The mdadm version is 1.12.0, and according to yum on
CentOS 4.4 it appears to be up to date for this distro. The array in question
showed as being degraded (I think that was the term it used). I have had
t
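For context, a degraded md array is usually inspected and repaired along these lines; a minimal sketch in which /dev/md0 and /dev/sdb1 are placeholder names, not taken from the report:

```shell
# A degraded two-member mirror shows up as [U_] in /proc/mdstat.
cat /proc/mdstat

# "State : clean, degraded" plus a missing/faulty member in the table.
mdadm --detail /dev/md0

# Add the replaced (or re-usable) member back; md then resyncs it.
mdadm /dev/md0 --add /dev/sdb1

# Watch the recovery progress.
cat /proc/mdstat
```

Requires root and an actual md array, so this is illustrative only.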
On Tue, 12 Jun 2007, Bill Davidsen wrote:
Dexter Filmore wrote:
I recently upgraded my file server, yet I'm still unsatisfied with the
write speed.
Machine now is an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM.
The four RAID disks are attached to the board's onboard SATA controller
Dexter Filmore wrote:
I recently upgraded my file server, yet I'm still unsatisfied with the write
speed.
Machine now is an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM.
The four RAID disks are attached to the board's onboard SATA controller
(Sil3114, attached via PCI)
Kernel is 2.6.21.1
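A quick way to put a number on "unsatisfied with the write speed" is a crude sequential-write test; the path below is an assumption (point it at a file on the array's filesystem to test the array itself):

```shell
# Write 64 MiB and force it to disk with conv=fsync, so the 1GB of RAM
# cannot mask the real write speed; dd reports MB/s on its last line.
dd if=/dev/zero of=/tmp/md0-writetest bs=1M count=64 conv=fsync
```

Comparing the same test against a single member disk and against the assembled array helps show whether the PCI-attached Sil3114 (shared ~133 MB/s bus) is the bottleneck.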
NeilBrown wrote:
Following are a couple of bugfixes for raid10 and raid1. They only
affect fairly uncommon configurations (more than 2 mirrors) and can
cause data corruption. They are suitable for 2.6.22 and 21-stable.
Thanks,
NeilBrown
[PATCH 001 of 2] md: Fix two raid10 bugs.
[PATCH 002
Neil Brown wrote:
On Tuesday June 12, [EMAIL PROTECTED] wrote:
Hello everyone.
I've got a SLES9 SP3 running and I've been quite happy with it so far.
Recently, I've created a RAID-5 spanning 4 disks on our company
server. Runs quite nicely and we're happy with that too. I created
that RAID usi
Neil Brown wrote:
On Monday June 11, [EMAIL PROTECTED] wrote:
Jan Engelhardt wrote:
Hi,
RAID levels 0 and 4 do not seem to like the -b internal. Is this
intentional? Runs 2.6.20.2 on i586.
(BTW, do you already have a PAGE_SIZE=8K fix?)
14:47 ichi:/dev # mdadm -C /dev/md0 -l 4 -e 1.
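The command shape under discussion, reconstructed as a hedged sketch (the quoted command is truncated, so the superblock minor version, member count, and device names below are placeholders, not the reporter's actual values):

```shell
# -l 4        : RAID level 4
# -e 1.0      : version-1 superblock
# -b internal : ask md to keep a write-intent bitmap inside the array
# -n 3        : number of member devices
mdadm -C /dev/md0 -l 4 -e 1.0 -b internal -n 3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1
```

The thread's question is why this combination of level and internal bitmap is rejected; the same invocation with -l 1 or -l 5 was apparently accepted.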
On Tuesday June 12, [EMAIL PROTECTED] wrote:
>
> I am very sorry, but it won't work with 0.9 superblocks either :( We are
> missing something small, but important here. Before you start to code:
> mdadm was running in monitor mode, and reported a Fail. mdadm is the
> latest version, 2.6.2.
>
> tg
Hm
Neil Brown wrote:
(reads code).
Ahhh. You are using version-1 superblocks aren't you? That code only
works for version-0.90 superblocks. That was careless of me. It
shouldn't be hard to make it work more generally, but it looks like it
will be slightly more than trivial. I'll try to get you a
On Tuesday June 12, [EMAIL PROTECTED] wrote:
> Hello everyone.
>
> I've got a SLES9 SP3 running and I've been quite happy with it so far.
>
> Recently, I've created a RAID-5 spanning 4 disks on our company
> server. Runs quite nicely and we're happy with that too. I created
> that RAID using the SLE
Hello everyone.
I've got a SLES9 SP3 running and I've been quite happy with it so far.
Recently, I've created a RAID-5 spanning 4 disks on our company server. Runs
quite nicely and we're happy with that too. I created that RAID using the SLES
mdadm (1.4 I believe) package.
After discovering that