dean gaudet wrote:
if this is for a database or fs requiring lots of small writes then
raid5/6 are generally a mistake... raid10 is the only way to get
performance. (hw raid5/6 with nvram support can help a bit in this area,
but you just can't beat raid10 if you need lots of writes/s.)
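To make the small-write penalty concrete (a back-of-the-envelope sketch, not figures from the original post): a random small write on raid5 costs four disk operations (read old data, read old parity, write new data, write new parity), while on raid10 it costs two (one write to each mirror half). With n spindles each good for roughly 100 random IOPS:

  raid5:   n * 100 / 4 write IOPS   (4 drives -> ~100/s)
  raid10:  n * 100 / 2 write IOPS   (4 drives -> ~200/s)

nvram on a hardware controller hides the latency of the read-modify-write cycle, but the disks still do all four operations, which is why it narrows the gap without closing it.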
Robin Bowes wrote:
Bill Davidsen wrote:
There have been several recent threads on the list regarding software
RAID-5 performance. The reference might be updated to reflect the poor
write performance of RAID-5 until/unless significant tuning is done.
Read that as tuning obscure parameters and
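The tunables usually meant here are the md stripe cache and the array's readahead; the values below are only illustrative starting points, not recommendations from the thread, and /dev/md0 is an assumed name:

  # raid5 stripe cache, in pages per device (default 256; larger values cost RAM)
  echo 8192 > /sys/block/md0/md/stripe_cache_size
  # readahead on the array device, in 512-byte sectors
  blockdev --setra 4096 /dev/md0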
On 15 Jan 2007, Bill Davidsen told this:
Nix wrote:
Number   Major   Minor   RaidDevice State
   0       8       6        0      active sync   /dev/sda6
   1       8      22        1      active sync   /dev/sdb6
   3      22       5        2      active sync   /dev/hdc5
On 14 Jan 2007, Neil Brown told this:
A quick look suggests that the following patch might make a
difference, but there is more to it than that. I think there are
subtle differences due to the use of version-1 superblocks. That
might be just another one-line change, but I want to make sure
On Mon, 15 Jan 2007, Robin Bowes wrote:
I'm running RAID6 instead of RAID5+1 - I've had a couple of instances
where a drive has failed in a RAID5+1 array and a second has failed
during the rebuild after the hot-spare had kicked in.
if the failures were read errors without losing the entire
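For reference, a raid6 array like the one Robin describes survives any two simultaneous failures, including a latent read error hit while rebuilding after a single disk death. A creation sketch (device names and chunk size are assumptions):

  mdadm --create /dev/md0 --level=6 --raid-devices=6 \
        --chunk=64 /dev/sd[a-f]1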
On Mon, 15 Jan 2007, dean gaudet wrote:
you can also run monthly checks...
echo check > /sys/block/mdX/md/sync_action
it'll read the entire array (parity included) and correct read errors as
they're discovered.
A-Ha ... I've not been keeping up with the list for a bit - what's the
minimum kernel version for this?
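A minimal cron sketch for the monthly check dean describes (the schedule, array name, and file location are assumptions; Debian-style installs ship a checkarray script that does much the same):

  # /etc/cron.d/md-check: scrub md0 at 01:00 on the 1st of each month
  0 1 1 * * root [ -w /sys/block/md0/md/sync_action ] && echo check > /sys/block/md0/md/sync_action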
On Mon, 15 Jan 2007, berk walker wrote:
dean gaudet wrote:
echo check > /sys/block/mdX/md/sync_action
it'll read the entire array (parity included) and correct read errors as
they're discovered.
Could I get a pointer as to how I can do this check in my FC5 [BLAG] system?
I can find
Hello Dean,
On Mon, 15 Jan 2007, dean gaudet wrote:
...snip...
it should just be:
echo check > /sys/block/mdX/md/sync_action
if you don't have a /sys/block/mdX/md/sync_action file then your kernel is
too old... or you don't have /sys mounted... (or you didn't replace X with
the raid device number)
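A quick way to verify those prerequisites, with md0 as an assumed array name:

  mount | grep sysfs                      # is /sys mounted?
  ls /sys/block/md0/md/sync_action        # does the kernel expose it?
  cat /sys/block/md0/md/sync_action       # prints "idle" when nothing is running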
Hi!
I'm getting md: bug in file drivers/md/md.c, line 1652 (see below) after
writing data to an md device using dd.
Is it really a bug or am I just using mdadm in the wrong way? I'm unsure
about the --assume-clean flag when creating the raid5 volume.
My kernel is 2.6.18.
Below are some
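On the --assume-clean question (a general note, not a diagnosis of the bug above): the flag tells md to skip the initial resync, so on raid5 the parity blocks start out unsynchronized and stay wrong until something rewrites every stripe. A sketch, with device names assumed:

  # letting md build parity itself is the safe default
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[abc]1
  # for an array already created with --assume-clean, a repair pass rewrites parity
  echo repair > /sys/block/md0/md/sync_action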