--- Paul Aviles [EMAIL PROTECTED] wrote:
Are SATA drives similar in performance to IDE drives? I have tested
Barracudas 7200.0 (500Gb) and WD too, on the same type of servers (more
than 1 unit), and what I am getting is painfully slow in terms of
reads/writes. Anyone out there with
[EMAIL PROTECTED] wrote:
Thanks for the reply.
On Mon, Jan 02, 2006 at 11:49:14PM -0500, Ross Vandegrift wrote:
I just began using RAID-1 (in 2.6.12) on a pair of SATA drives, and
now the hard disk drive light comes on during booting--about when the
RAID system is loaded--and stays on.
I am checking raid5 performance.
I am using asynchronous I/Os with the buffer size equal to the stripe size.
In this case I am using a stripe size of 1M with 2+1 disks.
Unlike raid0, raid5 drops performance by 50%.
Why? Is it because it does parity checking?
Thank you
--
Raz
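For what it's worth, the usual explanation is not the parity arithmetic itself but the read-modify-write path: with a 1M chunk on 2+1 disks a full stripe holds 2M of data, and any write smaller than (or not aligned to) that forces md to read the old data and old parity before it can write, roughly four I/Os where raid0 needs one. A rough measurement sketch, assuming the raid0 array is /dev/md0 and the raid5 array is /dev/md1 (hypothetical names; these writes destroy whatever is on the arrays, so only use scratch devices):

  # full-stripe-sized direct writes: mostly just parity computation overhead
  dd if=/dev/zero of=/dev/md1 bs=2M count=512 oflag=direct
  # sub-stripe direct writes: adds the read-modify-write penalty
  dd if=/dev/zero of=/dev/md1 bs=64k count=16384 oflag=direct
  # the same small writes on the raid0 array, for comparison
  dd if=/dev/zero of=/dev/md0 bs=64k count=16384 oflag=direct

If the full-stripe case lands close to raid0 and the sub-stripe case does not, the drop is coming from the extra reads rather than from the XOR itself.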
Matt Darcy wrote:
Hello all,
I am still persisting with my quest for a usable sata_mv driver.
The 2.6.15-rc5-m3 kernel appears to have been good to me.
Before I attempt moving to later releases of the 2.6.15 tree I thought
I'd get feedback from the people in the know
This is an intentional
mdadm-2.2/super1.c line 384:
        if (strcmp(update, "uuid") == 0)
                memcmp(sb->set_uuid, info->uuid, 16);
should probably be memcpy.
--
Roger
Supposedly, current versions of EVMS and LVM sit on top of the same
device-mapper kernel component... Given this, I have to wonder whether the
situation has changed since it was last evaluated.
On Tue, 2006-01-03 at 00:36 -0800, Andargor The Wise wrote:
Interestingly, I was just browsing this paper
http://www.cs.utk.edu/%7Eplank/plank/papers/CS-05-569.html which appears
to be quite on-topic for this discussion. I admit my eyes glaze over
during intensive math discussions but it appears tuned RS might not be
as horrible as you'd think since
On Tue, 3 Jan 2006, Mark Hahn wrote:
has anyone gotten good performance from aacraid?
if you configure, say, the 21610SA controller as JBOD and use
normal Linux MD raid, can you mitigate the pain?
I have an ibm xseries 206 with aacraid:
:03:02.0 RAID bus controller: Adaptec AAC-RAID (rev
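If the controller can be talked into exporting the disks as plain JBOD devices, the md side is the standard mdadm invocation. A minimal sketch, assuming the disks show up as /dev/sdb through /dev/sde (hypothetical names) and a four-disk RAID-5 is wanted:

  # create a software RAID-5 across the JBOD disks exported by the controller
  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  # watch the initial resync
  cat /proc/mdstat

Whether that actually mitigates the aacraid pain is exactly the question above; the sketch only covers the md half.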
Lorac> First I can't start the array because it complains about a bad
Lorac> superblock.
What's the exact error you get here? And the version of mdadm that
you're using? What's the output of 'cat /proc/mdstat' and 'mdadm
--detail /dev/md?' where ? is the number of your raid 5 array?
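A sketch of the commands being asked for here, with placeholder device names (substitute the real array and member devices):

  cat /proc/mdstat
  mdadm --detail /dev/md0          # overall state of the raid5 array
  mdadm --examine /dev/sda1        # per-member superblock, useful for 'bad superblock' complaints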
Lorac>
SeaTools is a DOS-based tool, so it doesn't matter what OS you have. It
just examines the drives themselves, not the filesystem. It is used to
check whether your drives are bad.
echo 20 > /proc/sys/dev/raid/speed_limit_max
echo 2 > /proc/sys/dev/raid/speed_limit_min
The max is
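A hedged sketch for inspecting and adjusting those limits (the numbers below are only examples, not recommendations; the units are KB/sec per device):

  # current resync speed limits
  cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
  # let a resync run faster when the machine is otherwise idle
  echo 200000 > /proc/sys/dev/raid/speed_limit_max
  # guarantee at least this rate even when the array is busy
  echo 10000 > /proc/sys/dev/raid/speed_limit_min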
On Monday 02 January 2006 09:46 am, Czigola Gabor wrote:
Yes, but the spare disk is only logically a spare, because it was in the
array right up until everything went wrong. I mean the data on it is
untouched. If I force it (by editing the raw disk superblock) to become a
normal active disk, should
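Before editing superblocks by hand, the usual first attempt is to let mdadm do the forcing. A sketch with hypothetical device names (check mdadm --examine on every member first):

  mdadm --stop /dev/md0
  # --force lets mdadm assemble from members whose event counters disagree;
  # if the superblock really records the disk as a spare rather than merely
  # out of date, --force alone may not promote it back to active
  mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1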