Mark Hahn wrote:
They seem to suggest RAID 0 is faster for reading than RAID 1, and I
can't figure out why.
with R0, streaming from two disks involves no seeks;
with R1, a single stream will have to read, say 0-64K from the first disk,
and 64-128K from the second. these could happen at the
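To make the seek argument concrete, here is a rough Python sketch (not md's code; the 64K chunk size and two-disk layout are just assumptions) of how a sequential read maps onto member disks under RAID0 striping versus a naive RAID1 balancer that alternates chunks between mirrors. Under RAID0 each disk reads a contiguous run; under the alternating mirror policy each disk has to skip over the chunks its partner served, hence the extra seeks.

# Illustrative sketch only: map a sequential read onto disks for RAID0
# striping vs. a naive RAID1 "alternate chunks" balancer.
CHUNK = 64 * 1024
DISKS = 2

def raid0_chunks(offset, length):
    """Each chunk lives on exactly one disk; both disks read contiguously."""
    reads = []
    for chunk_no in range(offset // CHUNK, (offset + length - 1) // CHUNK + 1):
        disk = chunk_no % DISKS                 # stripe round-robin
        disk_offset = (chunk_no // DISKS) * CHUNK
        reads.append((disk, disk_offset))
    return reads

def raid1_alternating_chunks(offset, length):
    """Every disk holds all the data; alternating chunks between mirrors
    means each disk must seek past the chunks the other one served."""
    reads = []
    for chunk_no in range(offset // CHUNK, (offset + length - 1) // CHUNK + 1):
        disk = chunk_no % DISKS                 # naive balancing choice
        disk_offset = chunk_no * CHUNK          # same LBA as on the array
        reads.append((disk, disk_offset))
    return reads

if __name__ == "__main__":
    print("RAID0:", raid0_chunks(0, 256 * 1024))
    print("RAID1:", raid1_alternating_chunks(0, 256 * 1024))

For a 256K read this prints contiguous per-disk offsets (0, 64K) for RAID0, but gapped offsets (0, 128K and 64K, 192K) for the alternating mirror policy.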
Michael Tokarev wrote (ao):
The most problematic case so far, which I have described numerous times (like
why Linux RAID isn't really RAID, and why it can be worse than a plain
disk), is when, after a single sector read failure, md kicks the whole
disk off the array, and when you start a resync (after replacing
Max Waterman wrote:
Still, it seems like it should be a solvable problem... if you order the
data differently on each disk; for example, in the two-disk case,
putting odd- and even-numbered 'stripes' on different platters [or sides
of platters].
The only problem there is determining the
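For what it's worth, ordering the data differently on each mirror is roughly what an offset/"far"-style mirrored layout does: each chunk's second copy lives on a different disk and in a different region, so a sequential read can stream RAID0-style from the first copies. A minimal sketch of such a mapping; the chunk size, disk count and rotation are illustrative assumptions, not md's actual raid10 code.

# Sketch of a "far"-style mirrored layout (in the spirit of raid10 far-2).
CHUNK = 64 * 1024
DISKS = 2
DISK_SIZE = 1 * 1024 * 1024 * 1024   # pretend 1 GiB per member disk

def far2_copies(chunk_no):
    """Return [(disk, byte_offset), ...] for the two copies of a chunk."""
    # Copy 1: plain RAID0-style striping over the first half of every disk.
    d1 = chunk_no % DISKS
    off1 = (chunk_no // DISKS) * CHUNK
    # Copy 2: same striping, rotated by one disk, placed in the second half.
    d2 = (d1 + 1) % DISKS
    off2 = DISK_SIZE // 2 + (chunk_no // DISKS) * CHUNK
    return [(d1, off1), (d2, off2)]

if __name__ == "__main__":
    for c in range(4):
        print(c, far2_copies(c))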
On Mer, 2006-01-18 at 09:14 +0100, Sander wrote:
If the (hard-disk-internal) remap succeeded, the OS doesn't see the bad
sector at all, I believe.
True for ATA; in the SCSI case you may be told about the remap having
occurred, but it's a 'by the way' type of message, not an error proper.
If you (the
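Since a successful in-drive remap is invisible to the filesystem, about the only way to notice one is to watch the drive's SMART counters. A small hedged sketch, assuming smartmontools is installed and that the drive reports the usual Reallocated_Sector_Ct attribute:

import subprocess

def reallocated_sectors(device="/dev/sda"):
    """Return the drive's reallocated-sector count, or None if not reported."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    for line in out.splitlines():
        if "Reallocated_Sector_Ct" in line:
            # the raw value is normally the last column of the attribute line
            return int(line.split()[-1])
    return None

if __name__ == "__main__":
    print("reallocated sectors:", reallocated_sectors())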
On Tue, Jan 17, 2006 at 12:09:27PM +, Andy Smith wrote:
I'm wondering: how well does md currently make use of the fact there
are multiple devices in the different (non-parity) RAID levels for
optimising reading and writing?
Thanks all for your answers.
Max Waterman [EMAIL PROTECTED] wrote:
Still, it seems like it should be a solvable problem... if you order the
data differently on each disk; for example, in the two-disk case,
putting odd- and even-numbered 'stripes' on different platters [or sides
Well, unfortunately for today's hard disks
On Wednesday January 18, [EMAIL PROTECTED] wrote:
Mark Hahn wrote:
They seem to suggest RAID 0 is faster for reading than RAID 1, and I
can't figure out why.
with R0, streaming from two disks involves no seeks;
with R1, a single stream will have to read, say 0-64K from the first disk,
Sander wrote:
Michael Tokarev wrote (ao):
The most problematic case so far, which I have described numerous times (like
why Linux RAID isn't really RAID, and why it can be worse than a plain
disk), is when, after a single sector read failure, md kicks the whole
disk off the array, and when you start a resync
Max Waterman wrote:
Mark Hahn wrote:
They seem to suggest RAID 0 is faster for reading than RAID 1, and I
can't figure out why.
with R0, streaming from two disks involves no seeks;
with R1, a single stream will have to read, say 0-64K from the first disk,
and 64-128K from the second. these
personally, I think this is useful functionality, but my personal
preference is that this would be in DM/LVM2 rather than MD. but given
Neil is the MD author/maintainer, I can see why he'd prefer to do it in
MD. :)
Why don't MD and DM merge some bits?
Jan Engelhardt
2006/1/18, Mario 'BitKoenig' Holbe [EMAIL PROTECTED]:
Mario 'BitKoenig' Holbe [EMAIL PROTECTED] wrote:
scheduled read-requests. Would it perhaps make sense to split one
single read over all mirrors that are currently idle?
Ah, I got it from the other thread - seek times :)
Perhaps using
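That seek-time point is why splitting a single read across mirrors rarely pays off. Here is a toy sketch of the simpler policy (not md's actual read_balance(), just the idea): send the whole read to one mirror, preferring an idle one, otherwise the one whose head is already nearest the target.

from dataclasses import dataclass

@dataclass
class Mirror:
    name: str
    last_sector: int      # where the head was left after the previous I/O
    pending: int          # outstanding requests on this member

def pick_mirror(mirrors, target_sector):
    """Prefer an idle mirror; among candidates, pick the nearest head."""
    idle = [m for m in mirrors if m.pending == 0]
    candidates = idle or mirrors
    return min(candidates, key=lambda m: abs(m.last_sector - target_sector))

if __name__ == "__main__":
    mirrors = [Mirror("sda", last_sector=1_000_000, pending=2),
               Mirror("sdb", last_sector=5_000, pending=0)]
    print(pick_mirror(mirrors, target_sector=8_000).name)   # -> sdb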
On Wednesday January 18, [EMAIL PROTECTED] wrote:
personally, I think this is useful functionality, but my personal
preference is that this would be in DM/LVM2 rather than MD. but given
Neil is the MD author/maintainer, I can see why he'd prefer to do it in
MD. :)
Why don't MD and DM
On Wednesday January 18, [EMAIL PROTECTED] wrote:
Hi,
Are there any known issues with changing the number of active devices in
a RAID1 array?
There is now, thanks.
I'm trying to add a third mirror to an existing RAID1 array of two disks.
I have /dev/md5 as a mirrored pair of two 40
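For reference, the usual two-step procedure for turning a two-disk RAID1 into a three-way mirror is to add the new disk as a spare, then raise the number of active devices so md resyncs onto it. A hedged sketch follows; the array and device names are placeholders, so check against your mdadm version before running anything.

import subprocess

def add_third_mirror(array="/dev/md5", new_device="/dev/sdc1"):
    """Add a disk to an existing RAID1 and grow it to a three-way mirror."""
    # Step 1: add the new device (it joins the array as a spare).
    subprocess.run(["mdadm", array, "--add", new_device], check=True)
    # Step 2: raise the active device count; md starts a resync onto the spare.
    subprocess.run(["mdadm", "--grow", array, "--raid-devices=3"], check=True)

if __name__ == "__main__":
    add_third_mirror()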
On Wednesday January 18, [EMAIL PROTECTED] wrote:
2006/1/18, Mario 'BitKoenig' Holbe [EMAIL PROTECTED]:
Mario 'BitKoenig' Holbe [EMAIL PROTECTED] wrote:
scheduled read-requests. Would it perhaps make sense to split one
single read over all mirrors that are currently idle?
Ah, I got it
On Wednesday January 18, [EMAIL PROTECTED] wrote:
On Wed, 18 Jan 2006, John Hendrikx wrote:
I agree with the original poster though, I'd really love to see Linux
Raid take special action on sector read failures. It happens about 5-6
times a year here that a disk gets kicked out of the
On Wednesday January 18, [EMAIL PROTECTED] wrote:
I agree with the original poster though, I'd really love to see Linux
Raid take special action on sector read failures. It happens about 5-6
times a year here that a disk gets kicked out of the array for a simple
read failure. A rebuild
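What the posters are asking for is roughly the following policy; this is a sketch of the desired behaviour, not of what md did at the time, and the object methods used here are hypothetical. On a read error, fetch the block from another mirror, try to rewrite the bad sector so the drive can remap it, and only fail the whole disk if that rewrite also errors out.

def handle_read_error(array, failed_disk, sector):
    """Recover a failed read from a mirror and attempt an in-place rewrite.
    array.disks, disk.read/write and array.kick are hypothetical helpers."""
    data = None
    for disk in array.disks:
        if disk is failed_disk:
            continue
        try:
            data = disk.read(sector)           # get a good copy from a mirror
            break
        except IOError:
            continue
    if data is None:
        raise IOError("no readable copy of sector %d" % sector)

    try:
        failed_disk.write(sector, data)        # give the drive a chance to remap
    except IOError:
        array.kick(failed_disk)                # only now drop the whole disk
    return data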
On Wednesday January 18, [EMAIL PROTECTED] wrote:
hi,
I have a silly question. Why won't md request buffers
cross devices? That is, why will a bh only be located on a single
storage device? I guess maybe the file system has aligned the bh? Can
anyone tell me the exact reasons? Thanks a lot!
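The short answer is that a striped md device advertises its chunk size to the layers above, so buffers are built small and aligned enough never to straddle a chunk (and therefore a device) boundary. The sketch below shows the split that would otherwise be needed for a buffer crossing a chunk boundary; the sizes are illustrative assumptions.

CHUNK = 64 * 1024

def split_at_chunk_boundaries(offset, length):
    """Break (offset, length) into pieces that each fit inside one chunk,
    i.e. map to exactly one member device of a striped array."""
    pieces = []
    while length > 0:
        room = CHUNK - (offset % CHUNK)      # bytes left in the current chunk
        take = min(room, length)
        pieces.append((offset, take))
        offset += take
        length -= take
    return pieces

if __name__ == "__main__":
    # a 100 KiB buffer starting 10 KiB into a chunk needs two pieces
    print(split_at_chunk_boundaries(10 * 1024, 100 * 1024))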
On Tuesday January 17, [EMAIL PROTECTED] wrote:
Hello Neil ,
On Tue, 17 Jan 2006, NeilBrown wrote:
Greetings.
In line with the principle of release early, following are 5 patches
against md in 2.6.latest which implement reshaping of a raid5 array.
By this I mean adding 1 or
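To see why a reshape has to shuffle essentially every block, note that with the default left-symmetric layout both the parity position and the data position of a chunk depend on the total number of devices. The arithmetic below is my reconstruction of the usual left-symmetric scheme, not code from the patches.

def left_symmetric(chunk_no, ndisks):
    """Map a logical data chunk to (disk, stripe) for an n-disk raid5."""
    data_per_stripe = ndisks - 1
    stripe = chunk_no // data_per_stripe
    pos = chunk_no % data_per_stripe
    parity_disk = (ndisks - 1) - (stripe % ndisks)   # parity rotates downwards
    data_disk = (parity_disk + 1 + pos) % ndisks     # data starts after parity
    return data_disk, stripe

if __name__ == "__main__":
    for c in range(8):
        print(c, "4 disks:", left_symmetric(c, 4),
              "-> 5 disks:", left_symmetric(c, 5))

Comparing the 4-disk and 5-disk columns shows that almost every chunk lands on a different (disk, stripe) after growing, which is exactly the data movement the reshape code has to perform safely.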
On Tuesday January 17, [EMAIL PROTECTED] wrote:
NeilBrown == NeilBrown [EMAIL PROTECTED] writes:
NeilBrown Previously the array of disk information was included in
NeilBrown the raid5 'conf' structure which was allocated to an
NeilBrown appropriate size. This makes it awkward to change
On Tuesday January 17, [EMAIL PROTECTED] wrote:
On Jan 17, 2006, at 06:26, Michael Tokarev wrote:
This is about code complexity/bloat. It's already complex enough.
I rely on the stability of the Linux softraid subsystem, and want
it to be reliable. Adding more features, especially
On Tuesday January 17, [EMAIL PROTECTED] wrote:
NeilBrown wrote (ao):
+config MD_RAID5_RESHAPE
Would this also be possible for raid6?
Yes. That will follow once raid5 is reasonably reliable. It is
essentially the same change to a different file.
(One day we will merge raid5 and raid6
While we're at it, here's a little issue I had with RAID5; not really
the fault of md, but you might want to know...
I have a 5x250GB RAID5 array for home storage (digital photos, my losslessly
ripped CDs, etc). 1 IDE drive and 4 SATA drives.
Now, it turns out one of the SATA drives is a