[EMAIL PROTECTED] wrote:
> We are waiting for the one day where the same block on all mirrors has
> read problems. Ok, we're now waiting for about 15 years because the
> HPUX mirror strategy is the same. Quite a long time without disaster
> but it will happen (till today Murphy was right in any cas
>I can go back and put together a patch over the weekend if anyone is
>interested in using it.
>
>-dinesh
>[EMAIL PROTECTED]
>-
Oh yes, please make this patch. We are very very interested in it!
We are waiting for the one day where the same block on all mirrors has
read problems. Ok, we're
Hi,
The Linux RAID crashes when reconstructing a disk.
We have a RAID 1 disk over two active SATA disks and one spare SATA
disk.
I was probing the RAID and I found that it occasionally crashes.
I attach to this email the sequences of commands I ran.
When I swap one disk on the R
J. David Beutel <[EMAIL PROTECTED]> wrote:
> Peter T. Breuer wrote, on 2005-Feb-23 1:50 AM:
>
> > Quite possibly - I never tested the rewrite part of the patch, just
> >
> >wrote it to indicate how it should go and stuck it in to encourage
> >others to go on from there. It's disabled by default.
In gmane.linux.raid Nagpure, Dinesh <[EMAIL PROTECTED]> wrote:
> I noticed the discussion about robust read on the RAID list and a similar one
> on the EVMS list, so I am sending this mail to both lists. Latent media
> faults which prevent data from being read from portions of a disk have always
>
This is very good! But most of my disk space is RAID5. Any chance you have
similar plans for RAID5?
Thanks,
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Nagpure, Dinesh
Sent: Wednesday, February 23, 2005 2:56 PM
To: '[EMAIL PROTECTED]'
Cc: 'linux
Peter T. Breuer wrote, on 2005-Feb-23 1:50 AM:
Quite possibly - I never tested the rewrite part of the patch, just
wrote it to indicate how it should go and stuck it in to encourage
others to go on from there. It's disabled by default. You almost
certainly don't want to enable it unless you are a
Nagpure, Dinesh wrote, on 2005-Feb-23 9:55 AM:
I can go back and put together a patch over the weekend if anyone is
interested in using it.
Yes, please, I'm very interested in using it.
Cheers,
11011011
-
Hi,
I noticed the discussion about robust read on the RAID list and a similar one
on the EVMS list, so I am sending this mail to both lists. Latent media
faults which prevent data from being read from portions of a disk have always
been a concern for us. Such faults will go undetected till the tim
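The "robust read" behaviour under discussion can be sketched in user space as follows. This is a hypothetical illustration of the fallback-and-rewrite idea only (the actual patch lives in the kernel's raid1 read path); `robust_read` and `mirror_paths` are made-up names:

```python
def robust_read(mirror_paths, offset, length):
    """Try each RAID-1 mirror in turn; on a read error, fall back to the
    next mirror, then rewrite the failed mirrors with the good data so a
    bad sector can be remapped by the drive (the 'rewrite part')."""
    last_err = None
    for i, path in enumerate(mirror_paths):
        try:
            with open(path, "rb") as f:
                f.seek(offset)
                data = f.read(length)
        except OSError as e:
            last_err = e
            continue
        # Best-effort rewrite of the mirrors that failed before this one.
        for bad in mirror_paths[:i]:
            try:
                with open(bad, "r+b") as f:
                    f.seek(offset)
                    f.write(data)
            except OSError:
                pass  # rewrite failure does not fail the read
        return data
    raise last_err  # every mirror failed
```

The key design point is that the read only fails if every mirror fails, while the rewrite is strictly best-effort.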
We must be in 2 different worlds!!!
I can have wide LVD and narrow SE on the same card (2940U2W). And wide
ultra and narrow SE on the same card (2940UW). That is why the card is so
good, IMO. Just not with Linux. The OS that must not be named supports
the above. :( In fact, I have a PC wit
Guy wrote:
I know a thing or 2 about SCSI. I know I had it correct. 1 config was all
wide LVD (2940U2W). My card has a LVD and a SE port on the same logical
SCSI bus.
I was surprised once when I noticed that such a "logical SCSI bus" really isn't
"logical" per se. I mean, if I plug ANY device into th
Michael Tokarev <[EMAIL PROTECTED]> wrote:
> (note raid5 performs faster than a single drive; it's to be expected
> as it is possible to write to several drives in parallel).
Each raid5 write must include at least ONE write to a target. I think
you're saying that the writes go to different targets fr
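The arithmetic behind that per-write cost is the RAID-5 read-modify-write: for a small write, the new parity is computed from the old data, the old parity, and the new data, so even a one-block update must write at least the data block and the parity block. A minimal sketch (`rmw_small_write` is a made-up name, not md's internal function):

```python
def rmw_small_write(old_data, old_parity, new_data):
    """RAID-5 read-modify-write for a single data block:

        new_parity = old_parity XOR old_data XOR new_data

    XORing out the old data and XORing in the new data updates the
    parity without reading the rest of the stripe -- but it still costs
    two reads and two writes across two member disks."""
    return bytes(p ^ a ^ b
                 for p, a, b in zip(old_parity, old_data, new_data))
```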
dean gaudet wrote:
On Tue, 22 Feb 2005, Michael Tokarev wrote:
When debugging some other problem, I noticed that
direct-io (O_DIRECT) write speed on a software raid5
is terribly slow. Here's a small table just to show
the idea (not the numbers by themselves, as they vary from system
to system, but how the
Mike Hardy <[EMAIL PROTECTED]> writes:
> I posted a raid5 parity calculator implemented in perl a while back (a
> couple weeks?) that is capable of taking your disk geometry, the RAID
> LBA you're interested in, and finding the disk sector it belongs to.
>
> I honestly don't remember if it can go
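For reference, the kind of geometry mapping such a calculator performs can be sketched as below, assuming md's default left-symmetric RAID-5 layout and no data offset; `raid5_map` is a made-up name and this is not the perl script itself:

```python
def raid5_map(lba, chunk_sectors, ndisks):
    """Map a RAID-5 array LBA (in sectors) to (member disk index,
    sector on that disk), assuming the left-symmetric layout: parity
    rotates backwards one disk per stripe, and data chunks start on
    the disk after the parity disk, wrapping around."""
    chunk = lba // chunk_sectors          # which data chunk of the array
    off = lba % chunk_sectors             # offset inside that chunk
    stripe = chunk // (ndisks - 1)        # ndisks-1 data chunks per stripe
    parity_disk = (ndisks - 1) - (stripe % ndisks)
    data_idx = chunk % (ndisks - 1)       # position within the stripe
    disk = (parity_disk + 1 + data_idx) % ndisks
    return disk, stripe * chunk_sectors + off
```

For example, with 3 disks and a 4-sector chunk, stripe 0 keeps its parity on disk 2 and its two data chunks on disks 0 and 1.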
This isn't a real bug as the smallest slab-size is 32 bytes
but please apply for consistency.
Found by the Coverity tool.
Signed-off-by: Alexander Nyberg <[EMAIL PROTECTED]>
= drivers/md/raid1.c 1.105 vs edited =
--- 1.105/drivers/md/raid1.c	2005-01-08 06:44:10 +01:00
+++ edited/driv
J. David Beutel <[EMAIL PROTECTED]> wrote:
> I'd like to try this patch
> http://marc.theaimsgroup.com/?l=linux-raid&m=110704868115609&w=2 with
> EVMS BBR.
>
> Has anyone tried it on 2.6.10 (with FC2 1.9 and EVMS patches)? Has
> anyone tried the rewrite part at all? I don't know md or the ker
I'd like to try this patch
http://marc.theaimsgroup.com/?l=linux-raid&m=110704868115609&w=2 with
EVMS BBR.
Has anyone tried it on 2.6.10 (with FC2 1.9 and EVMS patches)? Has
anyone tried the rewrite part at all? I don't know md or the kernel or
this patch, but the following lines of the patc