did you patch the kernel 2.2.16 with the raid patch?
take a look at the file /proc/mdstat
if that file has the word 'inactive' in it, then you need to patch your
kernel.
look at www.redhat.com/~mingo/
for patches.
allan
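The check Allan describes can be scripted. A minimal sketch, using a sample file so it runs anywhere (on a real system you would read /proc/mdstat itself; the sample contents are illustrative):

```shell
#!/bin/sh
# Sketch of the check above: the word 'inactive' in mdstat means the
# running kernel lacks the raid patch. A sample file stands in for
# /proc/mdstat so this can be demonstrated anywhere.
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid0]
md0 : inactive hdb1 hdd1
EOF
if grep -q inactive /tmp/mdstat.sample; then
    echo "needs-patch"
else
    echo "ok"
fi
```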
Jordan Wilson [EMAIL PROTECTED] said:
I have a few problems regarding
-Original Message-
From: Jordan Wilson [mailto:[EMAIL PROTECTED]]
Sent: Monday, June 12, 2000 12:16 PM
To: [EMAIL PROTECTED]
Subject: RAID0 problems
I have a few problems regarding my software RAID0 solution.
I have two
disks, hdb and hdd, on a raid0 array. Everything was
When running Bonnie, you should always set the file size to 3-4
times the size of your RAM, otherwise you get the 200 MB/sec speeds
(which are very pleasant, but not realistic). I think the 100% CPU is
in great part Bonnie generating the test files. I've tried copying files,
this takes almost no
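The sizing rule above (test file at 3-4x RAM) can be computed from /proc/meminfo. A sketch; the bonnie invocation is shown as a comment rather than run, since flags vary between Bonnie versions:

```shell
#!/bin/sh
# Sketch: pick a Bonnie test-file size of 4x physical RAM, so reads
# cannot be served from the page cache (the source of the unrealistic
# 200 MB/sec figures mentioned above).
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
size_mb=$(( ram_kb / 1024 * 4 ))
echo "test file size: ${size_mb} MB"
# then run e.g.: bonnie -s ${size_mb}
```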
"Kent" == Kent Nilsen [EMAIL PROTECTED] writes:
Kent I've got exactly the same problem on a Mylex hardware RAID-
Kent controller, writing is nearly twice as fast as reading. The
Kent drives are Barracuda 50Gb drives, the controller is a
Kent DAC1164P. I use the latest firmware, and latest
On Tue, 14 Mar 2000, Scott M. Ransom wrote:
Hello,
I have just set up RAID0 with two 30G DiamondMax (Maxtor) ATA-66 drives
connected to a Promise Ultra66 controller.
I am using raid 0.90 in kernel 2.3.51 on a dual PII-450 with 256M RAM.
Here are the results from bonnie:
Jakob Østergaard wrote:
Someone (probably Andre Hedrick, or perhaps Andrea Arcangeli -- sorry guys, I
don't recall) explained this on LKML. From memory it has something to do
with ATA modes and the kernel configuration. You haven't enabled ``Generic
busmaster support'', or perhaps one
Kent Nilsen wrote:
Do you by any chance have problems with the entire system
freezing after a while or during lots of activity?
Only freezes I have seen seem to be coming from Netscape occasionally
hogging all available resources (I think).
But I will look more carefully in the future...
I've got exactly the same problem on a Mylex hardware RAID-
controller, writing is nearly twice as fast as reading. The drives are
Barracuda 50Gb drives, the controller is a DAC1164P. I use the
latest firmware, and latest drivers from dandelion.com. Kernel
version is 2.2.14,
[ Sunday, February 13, 2000 ] James Manning wrote:
I'm going to try adding a --numruns flag for tiobench so we can have an
automated facility for averaging over a number of runs. I believe the
dip at 4 threads is real, but it's worth adding anyway :)
It'll be part of tiotest 0.23, but
[ Saturday, February 12, 2000 ] Peter Palfrader aka Weasel wrote:
So, I finally found time to try the new RAID stuff and speed
increased :)
Excellent.
I also tried RAID1 with and without the read-balancing patch:
The filesystem was always made with a simple "mke2fs dev":
-Rstripe= could be
At 00:32 12.02.00 -0800, smart wrote:
For this application, space is more important than hard drive failures,
so I've configured it as one large raid0 array, giving me a 160Gb.
Here are the performance stats using hdparm (and I humbly admit that I don't
even know if this is the right way to
[ Saturday, February 12, 2000 ] Martin Bene wrote:
Try the tests again with a test like tiotest; make sure the size of your
testfiles is at least double your physical RAM.
If size isn't specified, I have it defaulting to 2x size of /proc/kcore,
bracketed at 200 and 2000 MB :)
I was hoping to
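The defaulting rule described above (2x RAM, bracketed at 200 and 2000 MB) is easy to express directly; a sketch, with an illustrative helper name:

```shell
#!/bin/sh
# Sketch of the default-size rule quoted above: twice RAM in MB,
# clamped to the range [200, 2000] MB.
clamp_size() {
    size=$(( $1 * 2 ))
    [ "$size" -lt 200 ] && size=200
    [ "$size" -gt 2000 ] && size=2000
    echo "$size"
}
clamp_size 64     # small box  -> 200
clamp_size 256    #            -> 512
clamp_size 4096   # big box    -> 2000
```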
[ Wednesday, January 12, 2000 ] Andre Cruz wrote:
mkraid: aborted, see the syslog and /proc/mdstat for potential clues.
Which kernel? which raid patch? which raidtools?
James
--
Miscellaneous Engineer --- IBM Netfinity Performance Development
Drop the latest stuff from
ftp://ftp.*.kernel.org/pub/linux/kernel/people/hedrick/
and see if that helps.
On Tue, 14 Dec 1999 [EMAIL PROTECTED] wrote:
On Tue, 14 Dec 1999, Andrea Arcangeli wrote:
I just fixed this. It's due to raid colliding with 2.2.14pre12.
Apply this patch on the
On Tue, 14 Dec 1999, Andrea Arcangeli wrote:
I just fixed this. It's due to raid colliding with 2.2.14pre12.
Apply this patch on the top of your current tree:
ftp://ftp.*.kernel.org/pub/linux/kernel/people/andrea/patches/v2.2/2.2.14pre11/set_blocksize-1-raid0145-19990824-2.2.11.gz
I just tried making the same change on my system to see if it would help
me. But the symptoms stayed the same. If a drive is attached to the IDE3 or
IDE4 channels, the system locks up during bootup. One difference: I am
using the BE-6 motherboard.
Best Regards,
Robert Laughlin
On Wed, 8 Dec 1999,
Thanks for writing back, Michael. Yes it compiled cleanly, and I have a
short script that copies over the new kernel and runs lilo for me, so I
don't have to remember all the steps...:)
I have written Andre a number of times providing him with details about my
problem, I have also added some
Interesting. I never had lockups during boot, only during heavy IDE load.
Just a stupid question: you did make sure that the change was cleanly
compiled in and installed and all? I assume you probably did, but I've
missed steps before when not really watching what I was doing and sat
On Wed, 8 Dec 1999, Michael Trainor wrote:
I've got a Dual Celeron 466 system (Abit BP6) running four maxtor IDE
hard drives on the ATA66 controller (using ATA33 for all four drives).
I have the same box with PIIX4 onboard and extra HPT ata66 controller; 4
identical bigfoots, raid0.
On Wed, 8 Dec 1999, Michael Trainor wrote:
In any event, I ended up browsing through the code (experienced programmer,
linux kernel newbie) and found that I could disable Ultra/66 by changing
#define HPT366_ALLOW_ATA66_4 1 to #define HPT366_ALLOW_ATA66_4 0
Once I did that (and rebuilt and
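The one-line edit described above can also be made non-interactively. A sketch using sed against a stand-in file (the real target is the hpt366 driver source in the kernel tree; path omitted since it varies by kernel version):

```shell
#!/bin/sh
# Sketch of the change described above, applied with sed.
# A scratch copy stands in for the hpt366 driver source file.
echo '#define HPT366_ALLOW_ATA66_4 1' > /tmp/hpt366.c
sed 's/HPT366_ALLOW_ATA66_4 1/HPT366_ALLOW_ATA66_4 0/' /tmp/hpt366.c \
    > /tmp/hpt366.c.new
cat /tmp/hpt366.c.new   # -> #define HPT366_ALLOW_ATA66_4 0
```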
Lose my head next time... really...
Of course I meant with raid0
At 13:51 24/10/99 +0200, Kelina wrote:
Hi all,
I just had a crash with software raid 1; the power to 1 of the 2 raid HDs
came loose during bootup. It resulted in me having to reboot. Then it checked
the hd, found that an inode
On Sun, Oct 24, 1999 at 03:11:53PM +0200, Kelina wrote:
Lose my head next time... really...
Deal :) OK, you had me wondering there...
Of course I meant with raid0
No, if you have a bad block that the disk can't read, no software in the
world is going to make the data come back.
(if
kmod: failed to exec /sbin/modprobe -s -k md-personality-2, errno = 2
This is your problem. The standard RedHat initrd doesn't include raid support.
Auto-detection of raid arrays occurs before any filesystems are mounted.
Therefore, the raid module you need to run the array is not available.
On Fri, 8 Oct 1999, Pat Heath wrote:
Hi,
I was working on doing the autostart for raid0 and the part where it says:
3. The partition-types of the devices used in the RAID must be set to
0xFD (use fdisk and set the type to ``fd'')
is confusing because I don't have a
You're OK...there's a newer version of fdisk which recognizes this new
partition type.
Michael D. Black Principal Engineer
[EMAIL PROTECTED] 407-676-2923,x203
http://www.csi.cc Computer Science Innovations
http://www.csi.cc/~mike My home page
FAX
On Fri, Oct 08, 1999 at 08:28:39AM -0700, Pat Heath wrote:
I was working on doing the autostart for raid0 and the part where it says:
3. The partition-types of the devices used in the RAID must be set to
0xFD (use fdisk and set the type to ``fd'')
is confusing because I don't
Title: RE: Raid0
Did you use persistent superblocks in your raidtab?
Clay
-Original Message-
From: Pat Heath [mailto:[EMAIL PROTECTED]]
Sent: Friday, October 08, 1999 9:50 AM
To: [EMAIL PROTECTED]
Subject: Raid0
I was working on doing the autostart for raid0 and the part where
[EMAIL PROTECTED] wrote:
Is there a patch somewhere?
I can't find an -ac patch (like in 2.2.11),
and there is no support in 2.2.13pre4 either.
I applied the latest raid patch raid0145-19990824-2.2.11.bz2 to 2.2.12,
and it applied cleanly except for one header file, which is already
patched. Raid
Marc SCHAEFER [EMAIL PROTECTED] wrote:
I am going to try with raw IO (with sct patch) to see if it's the
memcpy_to_fs() (or 2.x equivalent) which is responsible for the slow down.
I am also going to try with two QLOGIC ISP1080 since they seem even faster
than the AIC7895 which was already
On 31 Aug 1999, Marc SCHAEFER wrote:
Now, I just changed to have the 4 disks on the QLOGIC 1080 (U2/LVD),
then 4 (2 each for each aic7xxx)
---Sequential Output ---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block---
Marc SCHAEFER [EMAIL PROTECTED] wrote:
Now, RAID5 on the same 7 disk set:
---Sequential Output ---Sequential Input--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU
On Thu, 2 Sep 1999, Helge Hafting wrote:
It would be interesting to
check out the very same benchmarks with an identical but higher-clocked
CPU, to see how much the saturation point depends on CPU speed. (this
might not be possible with your system i guess)
If overclocking isn't an
Hi,
On Thu, 29 Jul 1999 09:38:20 -0700, Carlos Hwa [EMAIL PROTECTED]
said:
I have a 2 disk raid0 with 32k chunk size using raidtools 0.90 beta10
right now, and have applied stephen tweedie's raw i/o patch. the raw io
patch works fine with a single disk but if i try to use raw io on
/dev/md0
Well, I just found why the import is so slow. Oracle8i linked against glibc2.1 is
3 times slower than Oracle 8.0.5 with glibc2.0, so the RAID0 has no influence,
because the time of reference is for Oracle 8.0.5+glibc2.0. I have installed the
old version of Oracle and then built the RAID0, and now
O.K., I was trying to understand why the performance for Oracle was poor
with RAID0. With no raid and 8 files of 2GB each, across 4 SCSI disks and two
SCSI hosts, an import of 1GB + analyze runs for 4 hours. With raid0 I killed
the process after 8 hours.
The machine is an HP LH4, two Xeon
Jan Edler [EMAIL PROTECTED] writes:
Sustained? How are you measuring? The ST317242A's are rated at a
fairly typical 8.5 MBytes/sec sustained. Your numbers are pretty good.
measurements is a lack of repeatability. I see about 10% variation from
run to run. For the ST317242A, Seagate
Osma Ahvenlampi wrote:
Jan Edler [EMAIL PROTECTED] writes:
Sustained? How are you measuring? The ST317242A's are rated at a
fairly typical 8.5 MBytes/sec sustained. Your numbers are pretty good.
measurements is a lack of repeatability. I see about 10% variation from
run to run.
now I
will ask them for case studies later.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Tim Walberg
Sent: Friday, July 30, 1999 8:20 AM
To: [EMAIL PROTECTED]
Subject: Re: raid0 vs. raid5 read performance
On 07/30/1999 07:17 -0700, Roeland M.J
On 07/30/1999 08:51 -0700, Roeland M.J. Meyer wrote:
Then what I get from this is that the fundamental unit of measure is
kilobytes (KB, 1024 bytes)?
Further, that I will have to write up various cases? Okay, I'll do it.
Case for normal generic file system usage and a case
On 07/30/1999 09:34 -0700, Roeland M.J. Meyer wrote:
Actually, it might be useful to consider several different
cases (you mentioned
1 and 4, but there are a couple other common cases):
1) RDBMS raw block device usage
2) small-file file system (i.e.
From: Tim Walberg [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 30, 1999 9:58 AM
On 07/30/1999 09:34 -0700, Roeland M.J. Meyer wrote:
how the RDBMS is implemented. However, I don't think
anyone does RAW
Don't know about under Linux, but I know of a number of sites
still using raw
I don't buy this; the atime updates should be subject to caching,
and not get written to the disk more than the update daemon
(kflushd or whatever) forces.
Jan Edler
NEC Research Institute
On Thu, Jul 29, 1999 at 09:20:15AM -0500, Tim Walberg wrote:
For pure reads, there should be no
On 07/29/1999 11:18 -0400, Jan Edler wrote:
I don't buy this; the atime updates should be subject to caching,
and not get written to the disk more than the update daemon
(kflushd or whatever) forces.
True, if there are a small number of accesses, but I have seen
many
On 07/29/1999 10:24 -0700, Lance Robinson wrote:
AFAIK: RAID-5 accesses are always in stripes. All disks are read (or
written) no matter how small the original read/write request. Whereas, RAID0
can read just one disk for smaller requests. RAID5 does a lot more work for
-
From: Ingo Molnar [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 18, 1999 5:05 AM
To: Robert McPeak
Cc: [EMAIL PROTECTED]
Subject: Re: RAID0 and RedHat 6.0
On Mon, 17 May 1999, Robert McPeak wrote:
Here are the relevant messages from dmesg:
hdd1's event counter: 000c
hdb1's event counter
Following up to 2 separate posts,
1. Red Hat 6 DOES come with a kernel patched for RAID. It DOES support
RAID autostart. In addition, it includes older script files for bringing
up older RAID devices (you need to have the older raidtools package
installed from a previous installation).
2.
During the init scripts, it does try to start and mount the RAID, but fails.
My guess is that you have raid0 as a loadable module -- is that so ?
You can check using (on a machine with an active md) something like:
% grep raid /proc/modules
raid1 6080 1
On Tue, 18 May 1999, Ingo Molnar wrote:
On Mon, 17 May 1999, Robert McPeak wrote:
Here are the relevant
Unless Red Hat have applied special patches to their distribution
kernel, RH 6.0 does not support RAID autostart. You should install the
latest RAID patches from
They have and it does.
Brian Murphy
On Mon, 17 May 1999, Robert McPeak wrote:
Here are the relevant messages from dmesg:
hdd1's event counter: 000c
hdb1's event counter: 000c
request_module[md-personality-2]: Root fs not mounted
do_md_run() returned -22
hm, this is the problem, it tries to load the RAID personality
"Robert McPeak" [EMAIL PROTECTED] writes:
I just installed RedHat 6.0, which appears to have the 0.90 version of the
raidtools installed by default. My boot disk is separate from the RAID. I
created a RAID0 spanning two 9gb drives, and it works fine, as long as I
manually go in and do a
On Thu, 29 Apr 1999, Tuomo Pyhala wrote:
I upgraded RH6.0 on one machine having raid0 created with some old
version of mdtools. However, the new code seems to be unable to start it,
complaining about superblock magic. Has the superblock been changed/added
in newer versions, making them
We had two IBM 6.4G IDE HDD's in a raid0 combo, thus making a 12G ext2
partition. Just yesterday one of
the drives died (it's making some very weird sounds ;( ) and I'm wondering
if there's any way of restoring the
data off the first hard drive, which seems to be working OK still.
Not unless
On Tue, 12 Jan 1999, M.H.VanLeeuwen wrote:
#swapoff -a
#dd if=/dev/zero of=swapfile bs=1k count=1
#mkswap swapfile
#losetup /dev/loop3 swapfile
#swapon /dev/loop3
#free
             total       used       free     shared    buffers     cached
Mem:        144044     141608       2436
Louis Mandelstam wrote:
On Tue, 12 Jan 1999, M.H.VanLeeuwen wrote:
#swapoff -a
#dd if=/dev/zero of=swapfile bs=1k count=1
#mkswap swapfile
#losetup /dev/loop3 swapfile
#swapon /dev/loop3
#free
             total       used       free     shared    buffers     cached
Mem:
On Tue, 12 Jan 1999, Jorge Nerin wrote:
I want to set up a raid0 striped swap partition on an old 386 with 2
hds. It has 2.2.0-pre1 and raidtools-0.90; raid0 is a module and it's
loaded when trying to do this.
IMHO there's no reason for using raid0 (striped) partition for swap.
If you use
On Tue, 12 Jan 1999, Bohumil Chalupa wrote:
IMHO there's no reason for using raid0 (striped) partition for swap.
If you use two swap partitions with equal priority, the kernel does
the striping automatically.
Another reason why NOT to use ANY RAID device for swap is that
it may allocate
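The equal-priority setup Bohumil describes looks like this in /etc/fstab (device names are examples):

```
/dev/hdb2   none   swap   sw,pri=1   0 0
/dev/hdd2   none   swap   sw,pri=1   0 0
```

With equal pri= values the kernel allocates swap pages round-robin across both partitions, which is the automatic striping mentioned above; no md device is involved.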
On Tue, 12 Jan 1999, Louis Mandelstam wrote:
In fact it's quite simple: the md device doesn't currently support swap
partitions (or swapping to files on an md device).
it's quite simple: it should work just fine, if not then it's a bug. (i've
tested it and it works, but YMMV, bug reports
In fact it's quite simple: the md device doesn't currently support swap
partitions (or swapping to files on an md device).
Haven't tried it myself, but I've had two different reports that swap on RAID-1
works, from people who didn't realise that it _shouldn't_ work. I encouraged them to
post
On Tue, 12 Jan 1999, Bruno Prior wrote:
Haven't tried it myself, but I've had two different reports that swap on RAID-1
works, from people who didn't realise that it _shouldn't_ work. I encouraged them to
post their experiences to the list, but I don't think either of them did. Could it be
Here is what I've tried on 2.0.36 on a raid 5 file system to
show it can be done, but I don't normally run this way because
of comments about locking up if resources are unavailable:
#swapoff -a
#dd if=/dev/zero of=swapfile bs=1k count=1
#mkswap swapfile
#losetup /dev/loop3 swapfile
#swapon