----- Original Message -----
From: Neil Brown [EMAIL PROTECTED]
To: JaniD++ [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Thursday, December 22, 2005 5:46 AM
Subject: Re: RAID5 resync question BUGREPORT!
On Monday December 19, [EMAIL PROTECTED] wrote:
----- Original Message -----
Hi,
I was interested in Linux's RAID capabilities and
read that mdadm was the tool of choice. We are
currently comparing software RAID with hardware RAID
and to complete our comparison, we were wondering if
the following is supported by mdadm:
1) OCE: Online Capacity Expansion:
Rik Herrin wrote:
I was interested in Linux's RAID capabilities and
read that mdadm was the tool of choice. We are
currently comparing software RAID with hardware RAID
MD is far superior to most of the hardware RAID solutions I've touched.
In short, it seems MD is developed with the goal of
Sebastian Kuzminsky wrote:
Andrew Burgess [EMAIL PROTECTED] wrote:
I'm seeing hard system lockups with 2.6.15-rc5 when trying to use a
RAID-6 array as a PV for LVM2.
I've got four SATA disks hanging off a Marvell 6081 controller. The disks
work great when I access them raw (without
Brad Campbell wrote:
Callahan, Tom wrote:
It is always wise to build in a spare, however; that goes for all
raid levels. In your configuration, if a disk fails in your RAID5, the
array will run degraded, and a second failure will take it down. RAID5 is
usually 3+ disks, with distributed parity rather than a mirror. So you
should have 3 disks at minimum, and
On Thu, 22 Dec 2005, Bill Davidsen wrote:
If you are seeing dual drive failures, I suspect your hardware has problems.
We run multiple 3 and 6 TB databases, and over a dozen 1 TB data caching
servers, all using a lot of small fast disk, and I haven't seen a real dual
drive failure in about 8
Andargor The Wise wrote:
Yet another thing, someone has suggested that I should
increase the chunk size for my RAID5 from 32 to either
64 or 128.
Is it worth it, considering that the system doesn't
normally run on a heavy load? Mail for a few users,
some read-only database applications,
--- Bill Davidsen [EMAIL PROTECTED] wrote:
Andargor The Wise wrote:
Yet another thing, someone has suggested that I
should
increase the chunk size for my RAID5 from 32 to
either
64 or 128.
Is it worth it, considering that the system doesn't
normally run on a heavy load? Mail for a
Hi,
On Thu, 22 Dec 2005, Rik Herrin wrote:
1) OCE: Online Capacity Expansion: From the latest
version of mdadm (v2.2), it seems that there is
support for it with the -G option. How well tested is
Well, we use LVM on top of raid, so this is not a big issue.
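For reference, growing an md array with the -G option generally looks like the following. This is a hedged sketch, not a tested recipe: it assumes mdadm v2.2+, a kernel new enough to support online reshape for the raid level in question, and hypothetical device names (/dev/md0, /dev/sdf1).

```shell
# Add the new disk to the array as a spare first (device names are examples).
mdadm /dev/md0 --add /dev/sdf1

# Grow the array to use the extra device; the reshape runs online
# in the background while the array stays available.
mdadm --grow /dev/md0 --raid-devices=5

# Watch reshape progress.
cat /proc/mdstat
```

How well the reshape path is tested for a given raid level and kernel is exactly the open question in the thread, so check the mdadm man page and kernel version before trying this on real data.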
3) Performance issues:
Lajber Zoltan wrote:
I have some simple test with bonnie++, the sw raid superior to hw raid,
except big-name storage systems.
http://zeus.gau.hu/~lajbi/diskbenchmarks.txt
Cool.
But what does gep, tip, diskvez, iras, olvasas and atlag mean?
-
To unsubscribe from this list: send the line
On Thu, 22 Dec 2005, Molle Bestefich wrote:
Lajber Zoltan wrote:
I have some simple test with bonnie++, the sw raid superior to hw raid,
except big-name storage systems.
http://zeus.gau.hu/~lajbi/diskbenchmarks.txt
Cool.
But what does gep, tip, diskvez, iras, olvasas and atlag mean?
Since I'll be recreating the array anyway, I might as
well split /, /home, and /var into three RAID5's.
Consider LVM2 which allows you to change the sizes of those
three partitions.
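The LVM2-over-md approach suggested above typically looks like this; a sketch with hypothetical device, volume group, and volume names:

```shell
# Use the md array as an LVM physical volume (all names are examples).
pvcreate /dev/md0
vgcreate vg0 /dev/md0

# Carve out logical volumes for /, /home and /var instead of
# fixed-size RAID partitions.
lvcreate -L 10G -n root vg0
lvcreate -L 50G -n home vg0
lvcreate -L 20G -n var vg0

# Later, grow one of them without touching the others...
lvextend -L +5G /dev/vg0/home
# ...then grow the filesystem on top (e.g. resize2fs for ext2/ext3).
resize2fs /dev/vg0/home
```

The point is that resizing a logical volume is cheap, whereas resizing one of three separate RAID5 arrays would mean recreating it.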
Andrew Burgess [EMAIL PROTECTED] wrote:
I'm seeing hard system lockups with 2.6.15-rc5 when trying to use a
RAID-6 array as a PV for LVM2.
I've got four SATA disks hanging off a Marvell 6081 controller. The disks
work great when I access them raw (without going through md or dm).
Nice
>>>>> "Andrew" == Andrew Burgess [EMAIL PROTECTED] writes:
3) Performance issues: I'm currently thinking of
using either RAID 10 or LVM2 with RAID 5 to serve as a
RAID server.
Andrew> I think you always want LVM2 between raid and the
Andrew> filesystem. Not only can you expand things but you can
sorry if this is already known/fixed: Assemble() is called from mdadm.c with
the update argument equal to NULL:
Assemble(ss, array_list->devname, mdfd, array_list, configfile,
NULL, readonly, runstop, NULL, verbose-quiet, force);
But in Assemble.c we have
if
Hi Rik,
Neil answered some of the questions earlier:
How can I know if the kernel I am using supports this
reconfiguration? What if I'm compiling the kernel by
hand? What options would I have to enable?
PS. Some of the terms used in the man page are a bit
ambiguous. For
On Thu, 22 Dec 2005, John Stoffel wrote:
I've been tempted by XFS at times, but I worry, esp since a lot of
other people who are core developers don't care for XFS as much. But
maybe I'll move that way for my next system.
We have been using xfs in production since 2.6.x. Typical config for us:
/ in