It doesn't compile if you select it, but "boot with raid" works without
enabling it anyway.
> I also noticed that the "boot with raid" option in the kernel won't compile
> properly in the 2.3.4X series.
>
> thanks,
>
> karl
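For anyone hunting for the option in question: a minimal sketch of the .config lines involved. The symbol names below are an assumption based on the 2.2-era raid options (the "boot with raid" entry is presumably CONFIG_MD_BOOT); check your own kernel's config.in before trusting them.

# core MD support plus autodetection of type-0xfd partitions at boot
CONFIG_BLK_DEV_MD=y
CONFIG_AUTODETECT_RAID=y
# the "boot with raid" entry left unselected, per the workaround above
# CONFIG_MD_BOOT is not set
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
CONFIG_MD_RAID5=y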
[ Friday, March 3, 2000 ] James Manning wrote:
> [ Friday, March 3, 2000 ] Sander Flobbe wrote:
> > In my kernel I only included the module for raid-1. Then, when I try
> > to create a raid-5 system it doesn't work:
> >
> > Okay, okay, my fault... but a tiny little cute hint about my mistake
> > from mkraid would be nice, wouldn't it? :*)
[ Friday, March 3, 2000 ] Karl Czajkowski wrote:
> > how much memory in the machine?
>
> 256 MB
> dual 550 MHz pentium III
>
> I did read other larger-than-memory files in between tests to try and
> avoid caching effects.
Barely larger than memory doesn't count.
It's easily argued that 2x mem
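A minimal sketch of what "comfortably larger than memory" means in practice on a 256 MB box; the file name and location here are only examples:

# build a ~600 MB test file (> 2x the 256 MB of RAM) on the array
dd if=/dev/zero of=/raid/bigfile bs=1024k count=600
# read some other large file first to push it out of the buffer cache,
# then time the read that actually matters
time cat /raid/bigfile > /dev/null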
[ Friday, March 3, 2000 ] Karl Czajkowski wrote:
> I upgraded the kernel to 2.3.47, 48, and 49 and got a performance
> problem where "time cat file ... > /dev/null" for a 300 MB file shows
> some scaling, but for a 600 MB file the throughput is almost identical to
> a single disk.
how much memory in the machine?
> how much memory in the machine?
256 MB
dual 550 MHz pentium III
I did read other larger-than-memory files in between tests to try and
avoid caching effects.
karl
[ Friday, March 3, 2000 ] Ricky Beam wrote:
> As I understand it, the "stride" will only make a real difference for
> fsck by ordering data so it's (more) evenly spread over the array. This
> sounds correct and even "looks" correct when observing the array -- but
> I've never bothered to look at
[ Friday, March 3, 2000 ] Sander Flobbe wrote:
> In my kernel I only included the module for raid-1. Then, when I try
> to create a raid-5 system it doesn't work:
>
> Okay, okay, my fault... but a tiny little cute hint about my mistake
> from mkraid would be nice, wouldn't it? :*)
also nice
I installed RedHat 6.1 on a machine with two 50 GB disks, and created
a large raid0 scratch space across them. Simple performance measurements,
consisting of "time cat file ... > /dev/null", showed great, near-perfect
performance scaling: 180 Mb/s for one disk and 350 Mb/s for two.
I upgraded the kernel to 2.3.47, 48, and 49 and got a performance
problem where "time cat file ... > /dev/null" for a 300 MB file shows
some scaling, but for a 600 MB file the throughput is almost identical to
a single disk.
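For reference, a raid0 scratch space like the one above is typically described by an /etc/raidtab along these lines and then initialised with "mkraid /dev/md0". Device names and chunk size here are illustrative, not the actual config:

raiddev /dev/md0
    raid-level            0
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            64
    device                /dev/sda1
    raid-disk             0
    device                /dev/sdb1
    raid-disk             1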
I just compiled and ran iozone (make linux). It was purely command line;
I haven't worked out how to get the graphics yet.
James Manning wrote:
>
> [ Friday, March 3, 2000 ] bug1 wrote:
> > there are a few benchmark progs around
> >
> > bonnie : old benchmark program
> > bonnie++ : updated bonnie to reflect modern hardware
Hi,
In my kernel I only included the module for raid-1. Then, when I try
to create a raid-5 system it doesn't work:
[flobbe@pio flobbe]# mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/hdd2, 504000kB, raid superblock at 503936kB
disk 1: /dev/hdd3, 504000kB, raid
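The quickest way to see the real cause here (no raid-5 personality in the kernel) is to look at /proc/mdstat before running mkraid; a sketch, assuming raid5 was at least built as a module:

# list the personalities the running kernel knows about
cat /proc/mdstat
#   Personalities : [raid1]      <- raid5 is missing
# load it if it was built as a module, then retry
modprobe raid5
mkraid /dev/md0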
On Thu, 2 Mar 2000, Christian Robottom Reis wrote:
>tried averaging out the values to see if -Rstripe made any difference.
...
As I understand it, the "stride" will only make a real difference for
fsck by ordering data so it's (more) evenly spread over the array. This
sounds correct and even "looks" correct when observing the array -- but
I've never bothered to look at
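For context, the option being talked about is mke2fs's stride hint, which tells ext2 how many filesystem blocks fit in one RAID chunk so metadata gets spread across members. A sketch, assuming a 64 kB chunk size and 4 kB blocks (64/4 = 16):

# lay out ext2 so block/inode bitmaps don't all land on the same member
mke2fs -b 4096 -R stride=16 /dev/md0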
[ Friday, March 3, 2000 ] Steve Terrell wrote:
> I have been using raid1 0.090-5 (kernel 2.2.14 w/ raid patch) on a
> couple of RedHat 6.1 boxes for several weeks with good results.
> Naturally, when I installed it on a production system, I ran into
> problems. Raid1 arrays work fine - after the machine (Redhat 6.0 kernel
> 2.2.14 w/patch) is up and running.
I have been using raid1 0.090-5 (kernel 2.2.14 w/ raid patch) on a
couple of RedHat 6.1 boxes for several weeks with good results.
Naturally, when I installed it on a production system, I ran into
problems. Raid1 arrays work fine - after the machine (Redhat 6.0 kernel
2.2.14 w/patch) is up and running.
Hi
A box of mine crashed this morning; the problem is that:
# raidstart -a
# cat /proc/kmsg
<4>(read) hdb1's sb offset: 15016576 [events: 004a]
<4>(read) hde1's sb offset: 15016576 [events: 004a]
<4>(read) hdf1's sb offset: 15016576 [events: 004a]
<4>(read) hdg1's sb offset: 1501657
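The thing to compare in that kmsg output is the [events: ...] counter on each member: a member with an older counter was kicked out of the array and won't be assembled. A rough sketch of how one might proceed (device names taken from the log above, the rest is generic raidtools 0.90 usage):

# see what the kernel actually managed to assemble
cat /proc/mdstat
# if one member shows a stale [events: ...] counter, and the disk itself
# checks out, it can be re-added and resynced into the running array
raidhotadd /dev/md0 /dev/hdg1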
On Fri, 3 Mar 2000 [EMAIL PROTECTED] wrote:
>Normally, I'd look for such things via the GUI scsi-config interface. Is
>there any way to access the mode pages for each disk while they're
>connected to the Mylex controller, or would I need to hook each drive to a
>traditional SCSI controller to do
[ Friday, March 3, 2000 ] bug1 wrote:
> there are a few benchmark progs around
>
> bonnie : old benchmark program
> bonnie++ : updated bonnie to reflect modern hardware
> tiotest : looks promising, still being developed
> iozone : haven't tried this, but www.iozone.org shows it
Johan,
Thanks for sending the bulk information about this bug. I have never seen
the buffer bug when running local loads, only when using nfs. The bug
appears more often when running with 64MB of RAM or less, but has been
seen when using more memory.
Below is a sample of the errors seen while d
[EMAIL PROTECTED] wrote:
>
> What program do I use for benchmarking?
> gary hostetler
there are a few benchmark progs around
bonnie : old benchmark program
bonnie++ : updated bonnie to reflect modern hardware
tiotest : looks promising, still being developed
iozone : haven't tried this, but www.iozone.org shows it
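Quick usage sketches for a couple of those, in case it helps someone get started; the mount point and sizes are only examples:

# bonnie++ with a 512 MB working set (bigger than RAM), run as a non-root user
bonnie++ -d /mnt/md0 -s 512 -u nobody
# iozone in automatic mode, capping the test file at 512 MB
iozone -a -g 512m -f /mnt/md0/iozone.tmp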
At 09:36 03.03.00, Thomas Rottler wrote:
>I was told several times now to use LILO with the Redhat patches.
>But my favorite distribution was/is/will be Debian, and I didn't find these
>patches anywhere on the net..
You can get the patch from ftp://ftp.sime.com/pub/linux/lilo.raid1.gz
Bye, Marti
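For the record, the point of that patch is to let lilo accept a raid1 device directly as the boot device and write the boot record onto each mirror half. A minimal lilo.conf sketch under that assumption (kernel path and label are examples only):

boot=/dev/md0
root=/dev/md0
image=/boot/vmlinuz-2.2.14
    label=linux
    read-only

Then re-run "lilo" after every kernel change, as usual.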
Hi!
I was told several times now to use LILO with the Redhat patches.
But my favorite distribution was/is/will be Debian, and I didn't find these
patches anywhere on the net..
So please post them to the list or directly to me (if they're too large)..
THX!
Thomas