Hello,
in the meantime I learned how to set up a RAID-1 from existing data. Now I
have one more question: if I decide some fine day that I need one disk of
my RAID-1 for something other than the mirror, how can I destroy the
RAID-1 and instead use one of the (old RAID) disks as a normal (non-RAID)
disk?
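One way to do this with the 0.90 raidtools, sketched with example device names (/dev/md0 and /dev/sdb1 are assumptions, not taken from the mail):

```sh
# Sketch only -- device names are examples; back up first.
umount /dev/md0                 # stop using the mirror
raidstop /dev/md0               # shut the array down
# Remove (or comment out) the md0 entry in /etc/raidtab so nothing
# tries to assemble it again.
fdisk /dev/sdb                  # change the partition type from 0xFD
                                # (Linux RAID autodetect) back to 0x83 so
                                # the kernel won't autostart the old mirror
mke2fs /dev/sdb1                # reformat the freed partition as plain ext2
```

The partition-type step matters with persistent superblocks, since autodetection is keyed off type 0xFD at boot.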
On Sun, 27 Feb 2000 23:22:19 -, you wrote:
SuSE (at least up to 6.3) comes with the old-style RAID (kernel and tools)
and the new tools.
The new tools won't work with old-style RAID, right?
You can do two things to determine what is being used:
- rpm -qi raidtools
If not installed they
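For completeness, a hedged sketch of ways to see which RAID generation is in use (package names vary by distribution; "raidtools" here is an assumption):

```sh
# Sketch: checks for old-style (0.4x) vs. new-style (0.90) RAID.
rpm -qi raidtools        # which tools package is installed, and its version
cat /proc/mdstat         # a 0.90-style kernel lists "personalities" and
                         # per-array status here
dmesg | grep -i raid     # boot messages also name the md driver version
```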
I'm trying to set up a RAID-1 configuration on my Linux box, and I always
get the same error, whatever kernel I'm using.
I first tried with a Mandrake 6.1 (kernel 2.2.13-15 updated to some
version > 2.2.13-22), then with a Mandrake 7 (kernel 2.2.14).
My configuration is: PII 450, 128 MB RAM
ais wrote:
I'm trying to set up a RAID-1 configuration on my Linux box, and I
always get the same error, whatever kernel I'm using.
I first tried with a Mandrake 6.1 (kernel 2.2.13-15 updated to some
version > 2.2.13-22), then with a Mandrake 7 (kernel 2.2.14).
My configuration is
Steve Terrell wrote:
Rainer Krienke wrote:
Hello,
in the meantime I learned how to set up a RAID-1 from existing data. Now
I have one more question:
---snip---
Maybe I missed a response, but perhaps you could share this with us?
--
Steve Terrell
Sr. Network Administrator
I have an existing RAID array autostarting at /dev/md0 that I would like
to move to a different md device number for my organizational peace of
mind as I add another array or two.
For instance, I'd like to make my md0 array autostart at md2 instead of
md0 and then create two new arrays at md0
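A cautious sketch of the renumbering with the 0.90 raidtools (device names assumed; note that with persistent superblocks the array's preferred minor is recorded on disk, so check whether your tools version honours a raidtab change before relying on autostart):

```sh
# Sketch only -- /dev/md0 -> /dev/md2 renumbering; back up first.
raidstop /dev/md0
# In /etc/raidtab, change "raiddev /dev/md0" to "raiddev /dev/md2",
# keeping the same component devices.
raidstart /dev/md2
cat /proc/mdstat          # confirm the array came up as md2
```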
Has anyone done any benchmarks with the Mylex ExtremeRAID 1100? I'm
planning on getting one of the 3-channel ones with 64 MB cache. Initially,
it will be delivered on a dual PIII-750 MHz machine with NT, but I'd like
to repurpose this as a Linux file server. It will have an external enclosure
On Thu, 2 Mar 2000 [EMAIL PROTECTED] wrote:
On Wed, 1 Mar 2000, Leon Brouwers wrote:
* DAC960 RAID Driver Version 2.2.4 of 23 August 1999 *
Copyright 1998-1999 by Leonard N. Zubkoff [EMAIL PROTECTED]
Configuring Mylex DAC1164P PCI RAID Controller
0:1 Vendor: WDIGTL
Title: FW: ExtremeRAID 1100 benchmarks
I had a 2-channel 1164 in a dual 450 PIII (256 MB RAM) with 4 18 GB
Seagate LVD 10K RPM drives in RAID 5. With all defaults, except a 4K block
size for the ext2 file system, I got about 22 MB/sec reads and writes
according to Bonnie. Best I remember, the 4K
I'm in the middle of testing this controller on an ES40 (4 CPU Alpha).
I should get some numbers next week. So far with a 4+p RAID 5 I'm
seeing about 17MB/s write performance with a single chain. I think
these are only 7200 RPM drives. I don't really care about read
performance, but that was up
[ Thursday, March 2, 2000 ] Chris Mauritz wrote:
Has anyone done any benchmarks with the Mylex ExtremeRAID 1100? I'm
planning on getting one of the 3-channel ones with 64 MB cache. Initially,
it will be delivered on a dual PIII-750 MHz machine with NT, but I'd like
to repurpose this as a
On Thu, 2 Mar 2000, Christian Robottom Reis wrote:
How about running tiotest on them so we can have a look at some real
numbers - Bonnie just isn't very meaningful. If you can run tiobench with
--numruns 5 or something close and a decent size (1024 looks fine), it's
more meaningful.
I think
I'm inclined to think this controller can do it with 4-5 spindles per
channel. I'm told the StrongARM processor at 233 MHz can really crank on
RAID 5 applications. Let us know how it works out.
Cheers,
Chris
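The RAID 5 work the StrongARM is cranking on is mostly parity, i.e. byte-wise XOR across the data blocks of a stripe. A minimal Python illustration of that math (for intuition only; unrelated to the actual controller firmware):

```python
# Minimal sketch of RAID 5 parity math: parity is the byte-wise XOR of the
# data blocks in a stripe, so any single lost block can be rebuilt by
# XOR-ing the parity with the surviving blocks.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]     # three data blocks in one stripe
parity = xor_blocks(data)

# Simulate losing the second block and rebuilding it from parity + survivors.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Every write updates parity too, which is why RAID 5 leans on the controller's CPU far more than a plain mirror does.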
- Original Message -
From: "Brian Pomerantz" [EMAIL PROTECTED]
To: [EMAIL
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 02, 2000 10:48 AM
To: Christian Robottom Reis
Cc: Leon Brouwers; Leonard N. Zubkoff; [EMAIL PROTECTED]
Subject: Re: Benchmark 1 [Mylex DAC960PG / 2.2.12-20 / P3]
On Thu, 2 Mar 2000,
On Thu, 2 Mar 2000, Gregory Leblanc wrote:
Perhaps they got reset when syslog cycled at midnight, or something. :)
Try comparing the number of resets to the number of reads/writes, more
like you would for Ethernet collisions. I think that resets will only
indicate a problem when the number
# tiobench.pl --numruns 5 --size 1024
Size is MB, BlkSz is Bytes, Read and Write are MB/sec, Seeks are Seeks/sec
Dir Size BlkSz Thr# Read (CPU%) Write (CPU%) Seeks (CPU%)
---  ----  -----  ----  -----------  ------------  -----------
.    1024   4096     1        25.60
[I tried to apply the low-latency patch as well, but it panicked all over
the place and I gave up.]
Same setup; notice the throughput and seek improvement over the last one
with numthreads 1. The difference the block size makes to throughput
and seeks grows a bit. Chunk doesn't do much. I
I've run a couple of benchmarks using different numruns on the same
hardware, calculated the variations and deviations, and found that numruns
should be 4 or more on my current setup [P3, linux-2.2.14-Raid1, mem=16M,
runlevel 1, single AHA2940UW1, two Quantum Atlas IV] to keep variance down
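The variance check described above can be sketched like this (the throughput numbers are made up for illustration):

```python
# Sketch of the variance check: given per-run throughput figures
# (MB/sec, hypothetical numbers), judge whether enough runs were made
# for the mean to be stable.

import statistics

def variation(runs):
    """Coefficient of variation (sample stdev / mean) for a list of results."""
    return statistics.stdev(runs) / statistics.mean(runs)

runs = [25.6, 24.9, 26.1, 25.4]      # example throughputs from 4 runs
print(f"CV = {variation(runs):.3f}") # a few percent or less is usually fine
```

If the coefficient of variation is still large after adding a run, keep increasing numruns.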
Same setup, but with kernel 2.2.14 with the vanilla RAID patch from
~mingo. Not much difference at all from the former - notice I've run it
with a couple of extra threads (16,32,64), and more threads get more seeks
(but worse throughput).
Chunk is 4k, Stride is 4 and Block is 1024k
Machine
this is likely a problem that has been solved many times before, so
I'm hoping there's a FAQ with the answer somewhere. If not,
I've just hit the first implementation snag with raidtools-0.90 on a
Slackware 7 kernel 2.2.13.
I've re-compiled the kernel to have md support with RAID-1 included
Derek,
You need to patch your kernel and re-compile it.
If you downloaded the 0.90 raidtools, you need to patch the kernel. The
stock 2.2 kernel was designed to run with the 0.4x raidtools.
At 07:11 PM 3/2/00, Derek Shaw wrote:
this is likely a problem that has been solved many times before, so
I'm hoping there's
[ Thursday, March 2, 2000 ] Derek Shaw wrote:
I've re-compiled the kernel to have md support with RAID-1 included in
ftp.fi.kernel.org/pub/linux/daemons/raid/alpha/
fetch the patch (raid0145) for kernel 2.2.11 and apply it to your
kernel source (since you said 2.2.13) and ignore rejects
If you
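The patch-and-rebuild steps above, sketched for a 2.2 kernel tree (the raid0145 patch filename below is a placeholder; the exact name changes between releases, so check the FTP directory listing):

```sh
# Sketch only -- the patch filename is a placeholder.
cd /usr/src/linux
zcat /tmp/raid0145-2.2.11.gz | patch -p1   # expect (and review) a few
                                           # rejects when applying to 2.2.13
make menuconfig        # enable RAID-1 under "Multiple devices driver support"
make dep bzImage modules modules_install
```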
I had a look at the numbers you got on the benchmark you posted, and I've
tried averaging out the values to see if -Rstripe made any difference.
FWIW, it seems there's a small (but existent, IMHO) improvement on reads
and not much change on writes. Reads seemed more improved running 2+
threads,
What program do I use for benchmarking?
gary hostetler
On Thu, 2 Mar 2000, Leonard N. Zubkoff wrote:
An occasional reset is not a problem; it simply means a command timed
out and the DAC960 firmware responds by resetting the bus and retrying
all the pending commands. In the case of the IBM drives, there is a
mode page setting that controls how