i.e. one or more drives can be added and the array will re-stripe
while on-line.
Most of the interesting work was already done for raid5.
This just extends it to raid6.
mdadm newer than 2.6 is needed for complete safety; however, any
version of mdadm which supports raid5 reshape will do a good enough job.
An error always aborts any resync/recovery/reshape on the understanding
that it will immediately be restarted if that still makes sense.
However, a reshape currently doesn't get restarted. With this patch
it does.
To avoid restarting when it is not possible to do work, we call
into the personality to check that the reshape can still make progress.
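As a rough illustration of that flow, here is a minimal sketch, assuming
2.6-era md names (mddev_t, reshape_position, a check_reshape personality
hook, the MD_RECOVERY_* bits); the actual patch differs in detail:

/*
 * Hedged sketch only, not the real md code: restart an interrupted
 * reshape, but first ask the personality whether that makes sense.
 */
static void maybe_restart_reshape(mddev_t *mddev)
{
        if (mddev->reshape_position == MaxSector)
                return;                 /* no reshape was interrupted */

        /* ask the personality whether useful work is possible */
        if (!mddev->pers->check_reshape ||
            mddev->pers->check_reshape(mddev) != 0)
                return;                 /* cannot do work right now */

        set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
        set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);  /* wake md thread */
}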
From: "H. Peter Anvin" <[EMAIL PROTECTED]>
- Use kernel_fpu_begin() and kernel_fpu_end()
- Use boot_cpu_has() for feature testing even in userspace
Signed-off-by: H. Peter Anvin <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid6mmx.c | 1
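The pattern being described looks roughly like this; a sketch modelled
on the raid6 x86 code, assuming 2.6-era header locations, not the actual
raid6mmx.c diff:

#include <linux/types.h>
#include <asm/i387.h>           /* kernel_fpu_begin()/kernel_fpu_end() */
#include <asm/cpufeature.h>     /* boot_cpu_has(), X86_FEATURE_MMX */

int raid6_have_mmx(void)
{
        /* boot_cpu_has() also works in the userspace test build */
        return boot_cpu_has(X86_FEATURE_MMX);
}

static void raid6_mmx_syndrome_sketch(int disks, size_t bytes, void **ptrs)
{
        kernel_fpu_begin();     /* save FPU state, disable preemption */
        /* ... MMX P/Q syndrome computation over ptrs ... */
        kernel_fpu_end();       /* restore FPU state */
}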
The mddev and queue might be used for another array which does not
set these, so they need to be cleared.
Signed-off-by: NeilBrown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/md.c |3 +++
1 file changed, 3 insertions(+)
diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c
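The change being described is presumably of roughly this shape; a
sketch, assuming 2.6-era request_queue hook names (merge_bvec_fn and
friends), not a quote of the actual three lines:

static void md_clear_queue_hooks(mddev_t *mddev)
{
        /* a later array reusing this queue/mddev may never set these,
         * so stale pointers must not survive the stop */
        mddev->queue->merge_bvec_fn = NULL;
        mddev->queue->unplug_fn = NULL;
        mddev->queue->issue_flush_fn = NULL;
}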
There are two errors that can lead to recovery problems with raid10
when used in 'far' mode (not the default).
Due to a '>' instead of '>=' the wrong block is located, which would
result in garbage being written to some random location, quite
possibly outside the range of the device, causing the newly
reconstructed data to be corrupt.
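The boundary case is easy to reproduce in a self-contained sketch
(hypothetical names, ordinary userspace C, not the raid10 source):

#include <stdio.h>

/* Locate the start of the stripe containing `sector`.  With '>' in
 * place of '>=' the loop stops one stripe early whenever `sector`
 * falls exactly on a stripe boundary - the wrong block is located. */
static long find_stripe(long sector, long stripe_sectors)
{
        long s = 0;
        while (sector - s >= stripe_sectors)    /* the bug used '>' */
                s += stripe_sectors;
        return s;
}

int main(void)
{
        /* sector 128 starts a new 64-sector stripe: correct answer 128;
         * with '>' this would wrongly return 64 */
        printf("%ld\n", find_stripe(128, 64));
        return 0;
}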
md tries to warn the user if they e.g. create a raid1 using two partitions
of the same device, as this does not provide true redundancy.
However, it also warns if a raid0 is created like this, and there is
nothing wrong with that.
At the place where the warning is currently printed, we don't necessarily
know what level the array will be, so the warning has to move to a point
where the level is known.
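A sketch of where such a check might sit, purely illustrative (the type
and field names, and the bd_contains test, are assumptions, not the
actual patch):

static void warn_same_device(mddev_t *mddev, mdk_rdev_t *rdev1,
                             mdk_rdev_t *rdev2)
{
        if (mddev->level == 0)
                return;     /* raid0 claims no redundancy: stay quiet */

        /* bd_contains points at the whole disk a partition lives on */
        if (rdev1->bdev->bd_contains == rdev2->bdev->bd_contains)
                printk(KERN_WARNING "md: %s: using two partitions of "
                       "the same device - no true redundancy\n",
                       mdname(mddev));
}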
The following six patches are against 2.6.20 and are suitable for 2.6.21.
They are not against -mm because the new plugging there breaks raid5
(making these patches untestable), and because there are a few fairly
minor intersections between these patches and those.
There is also a very minor conflict with the hardware-xor patches.
Richard Scobie wrote:
Thought this paper may be of interest. A study done by Google on over
100,000 drives they have/had in service.
http://labs.google.com/papers/disk_failures.pdf
Bastards:
"Failure rates are known to be highly correlated with drive
models, manufacturers and vintages [18].
Disks are sealed, and a desiccant is present in each to keep humidity down.
If you ever open a disk drive (e.g. for the magnets, or the mirror-quality
platters, or for fun) then you can see the desiccant sachet.
cheers
Al Boldi wrote:
> Richard Scobie wrote:
>
>>Thought this paper may be of interest.
Hey Neil,
I tested this new patch and it seems to work! I'm going to do some
more vigorous testing, and I'll let you know if any more issues bubble
out. Thanks!
-John
On 2/15/07, Neil Brown <[EMAIL PROTECTED]> wrote:
On Thursday February 15, [EMAIL PROTECTED] wrote:
> Ok tried the patch and go
On Mon, 19 Feb 2007, Marc Marais wrote:
On Sun, 18 Feb 2007 07:13:28 -0500 (EST), Justin Piszcz wrote
On Sun, 18 Feb 2007, Marc Marais wrote:
On Sun, 18 Feb 2007 20:39:09 +1100, Neil Brown wrote
On Sunday February 18, [EMAIL PROTECTED] wrote:
Ok, I understand the risks which is why I did
Richard Scobie wrote:
> Thought this paper may be of interest. A study done by Google on over
> 100,000 drives they have/had in service.
>
> http://labs.google.com/papers/disk_failures.pdf
Interesting link. They seem to point out that SMART does not necessarily
warn of pending failure. This is pro