Veritas Volume Manager has a "virtual" device driver (vxio) that sits
between the actual device driver and the higher layers.
This allows for some advanced RAID possibilities - if a write fails to
complete, the ioctl returns a value of -1... That way you can execute a
Veri
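For illustration (this is not Veritas code -- the device path below is
just a placeholder), the basic idea of reacting to a failed write's -1
return looks something like this in user space:

/* Sketch: check the return value of a synchronous write to a
 * (hypothetical) virtual block device and react to a failed I/O.
 * /dev/vx/dsk/vol0 is a placeholder path, not a real Veritas node. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[512];
    int fd = open("/dev/vx/dsk/vol0", O_WRONLY | O_SYNC);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    memset(buf, 0, sizeof(buf));
    if (write(fd, buf, sizeof(buf)) < 0) {
        /* The driver reported the failure; a volume manager could
         * now retry, read the mirror, or mark the plex bad. */
        fprintf(stderr, "write failed: %s\n", strerror(errno));
    }
    close(fd);
    return 0;
}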
> Umm. Isn't RAID implemented as the md device? That implies that it is
> responsible for some kind of error management. Bluntly, the file systems
> don't declare a file system kaput until they've retried the critical
> I/O operations. Why should RAID5 be any less tolerant?
File systems give up t
> any data, but under normal default drive setup the sector will not be
> reallocated. If testing the failing sector is too much effort, a
> simple overwrite with the corrected data, at worst, improves the
> chances of the drive firmware being able to reallocate the sector.
> This works just f
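A rough sketch of that overwrite, assuming the bad sector's LBA is
already known (the device path and sector number here are made up);
O_SYNC pushes the write through to the drive so the firmware sees it:

/* Sketch: overwrite a known-bad sector with recovered data so the
 * drive firmware gets a chance to reallocate it.  Device path and
 * sector number are hypothetical. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

#define SECTOR_SIZE 512

int main(void)
{
    unsigned char data[SECTOR_SIZE];  /* corrected data, e.g. from a mirror */
    off_t bad_sector = 123456;        /* hypothetical LBA of the bad sector */
    int fd = open("/dev/sdb", O_WRONLY | O_SYNC);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    memset(data, 0, sizeof(data));    /* stand-in for the real recovered data */
    if (pwrite(fd, data, SECTOR_SIZE, bad_sector * SECTOR_SIZE) != SECTOR_SIZE)
        fprintf(stderr, "rewrite failed: %s\n", strerror(errno));
    close(fd);
    return 0;
}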
Alan Cox wrote:
>
> > > 1) Read and write errors should be retried at least once before kicking
> > > the drive out of the array.
> >
> > This doesn't seem unreasonable on the face of it.
>
> Device level retries are the job of the device level driver
>
> > > 2) On more persistent read error
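To illustrate point 1) above: a minimal sketch of retrying a read once
before failing the drive.  read_sector_retry() is a hypothetical
helper, not anything in the md driver, and /dev/sdb is an example path.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

#define SECTOR_SIZE 512

/* Retry a failed read once before reporting the sector bad.
 * Returns 0 on success, -1 after two failed attempts. */
static int read_sector_retry(int fd, off_t sector, unsigned char *buf)
{
    int attempt;

    for (attempt = 0; attempt < 2; attempt++) {
        if (pread(fd, buf, SECTOR_SIZE, sector * SECTOR_SIZE) == SECTOR_SIZE)
            return 0;
        /* first failure: try once more before giving up */
    }
    return -1;  /* persistent error: only now consider the drive bad */
}

int main(void)
{
    unsigned char buf[SECTOR_SIZE];
    int fd = open("/dev/sdb", O_RDONLY);  /* hypothetical member disk */

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (read_sector_retry(fd, 0, buf) != 0)
        fprintf(stderr, "sector unreadable after retry\n");
    close(fd);
    return 0;
}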
On Wed, 21 Mar 2001, Max TenEyck Woodbury wrote:
>
> Umm. Isn't RAID implemented as the md device? That implies that it is
> responsible for some kind of error management. Bluntly, the file systems
> don't declare a file system kaput until they've retried the critical
> I/O operations. Why should RAID5 be any less tolerant?
Alan Cox wrote:
>
>>> 1) Read and write errors should be retried at least once before kicking
>>> the drive out of the array.
>>
>> This doesn't seem unreasonable on the face of it.
>
> Device level retries are the job of the device level driver
Umm. Isn't RAID implemented as the md device?
On Wednesday March 21, [EMAIL PROTECTED] wrote:
>
> My question is based upon prior experience working for Stratus Computer. At
> Stratus it was impractical to go beat the disk drives with a hammer to cause
> them to fail - rather we would simply use a utility to cause the disk driver
> to begin
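In the same spirit, a purely hypothetical software fault-injection
wrapper: it makes every Nth read fail with EIO, so the layers above
can be exercised without taking a hammer to the drives.

#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

static int fail_every = 100;  /* inject one failure per 100 reads */
static int read_count;

/* Behaves like read() but fails every Nth call with EIO, as if the
 * driver had been told to start returning errors. */
static ssize_t faulty_read(int fd, void *buf, size_t len)
{
    if (++read_count % fail_every == 0) {
        errno = EIO;
        return -1;
    }
    return read(fd, buf, len);
}

int main(void)
{
    char buf[512];
    int i, errors = 0;
    int fd = open("/dev/zero", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    for (i = 0; i < 1000; i++)
        if (faulty_read(fd, buf, sizeof(buf)) < 0)
            errors++;
    printf("%d injected errors\n", errors);
    close(fd);
    return 0;
}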
>Need recommendations for Linux-supported PCI IDE controllers that
>have more than 2 channels on a plug-in card.
I have performance measurements and other comments for the 3ware Escalade
6800 8-channel/drive controllers at the following URL (corrected):
http://www.research.att.com/~gjm/linux/ide-
> > 1) Read and write errors should be retried at least once before kicking
> > the drive out of the array.
>
> This doesn't seem unreasonable on the face of it.
Device level retries are the job of the device level driver
> > 2) On more persistent read errors, the failed block (or whatever u
>Need recommendations for Linux-supported PCI IDE controllers that
>have more than 2 channels on a plug-in card.
I have performance measurements and other comments for the 3ware Escalade
6800 8-channel/drive controllers at the following URL:
http://euphony.research.att.com/~gjm/linux/ide-3wraid.h
Need recommendations for Linux-supported PCI IDE controllers that
have more than 2 channels on a plug-in card.
Michael
If you go to the Linux kernel config (2.4), SCSI -> Low level drivers,
there is a line there that mentions a debugging SCSI driver that can be
programmed to generate errors.
I have no idea what it does though.. Just wanted to point it out.
++Jos
And thus it came to pass that Richard Schaal
Greetings!
I have set up a system with an IDE RAID1 for /boot, /, and swap, and a
SCSI RAID5 for the data area, and it seems to be working just fine now
that I turned on DMA as a kernel option for the IDE drives in the
RAID1. (I got almost an 80% improvement in throughput on the RAID1
after the change.)
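If I remember right, "hdparm -d1 /dev/hda" boils down to the
HDIO_SET_DMA ioctl, so the same thing can be done from a program;
a sketch (the device path is just an example):

/* Sketch: turn on DMA for an IDE drive, roughly what
 * "hdparm -d1 /dev/hda" does via HDIO_SET_DMA. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>

int main(void)
{
    int fd = open("/dev/hda", O_RDONLY | O_NONBLOCK);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (ioctl(fd, HDIO_SET_DMA, 1L) < 0)  /* 1 = enable DMA */
        perror("HDIO_SET_DMA");
    close(fd);
    return 0;
}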
On Tue, 20 Mar 2001 07:06:04 +1100 (EST), Neil Brown wrote:
[snip]
>
>Try re-arranging the drives on the scsi chain. If the questionable
>one is currently furthest from the host-adapter, make it closest. See
>if that has any effect.
>It could well be cabling, or terminators or something. Or it
Dear all,
We're running into a problem with a RAID1 config on our system. I don't
know if it is a problem with the disks or with the RAID code/DMA code
on the i810.
We'd like to know whether RAID1 really cannot protect against a single
disk failure without crashing our partition, or whether something is
wrong in our config?
We're