Re: FAQ

2000-08-03 Thread Marc Mutz

Gregory Leblanc wrote:
 
snip
2.4. How do I apply the patch to a kernel that I just downloaded from
ftp.kernel.org?
 
Put the downloaded kernel in /usr/src. Change to this directory, and
move any directory called linux to something else. Then, type tar
-Ixvf kernel-2.2.16.tar.bz2, replacing kernel-2.2.16.tar.bz2 with your
kernel. Then cd to /usr/src/linux, and run patch -p1 < raid-2.2.16-A0.
Then compile the kernel as usual.

My tar cannot handle bz2-compressed archives unless used with
--use-compress-program=bzip2, so that line should probably read "bzcat
kernel-2.2.16.tar.bz2 | tar xf -". Also, the only tar I have seen that
knows bzip2 is Slackware's, and it uses the '-y' switch for that. I have
never seen the '-I' switch for tar, and my 'info tar' does not list it.
Bottom line: your tar is too customized to be in a FAQ.
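
For reference, the variants mentioned above side by side (the first is
portable to any tar; the second needs a tar with --use-compress-program,
e.g. GNU tar; the third works only with Slackware's tar):

$ bzcat kernel-2.2.16.tar.bz2 | tar xf -
$ tar --use-compress-program=bzip2 -xf kernel-2.2.16.tar.bz2
$ tar -yxf kernel-2.2.16.tar.bz2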

Marc


-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)




Re: speed and scaling

2000-07-18 Thread Marc Mutz

Dan Hollis wrote:
 
 On Sat, 15 Jul 2000, Marc Mutz wrote:
  Look, you are on the _very_ wrong track! You may have 6 or 7 PCI
  _slots_, but you have only _one_ bus, i.e. only 133MB/sec bandwidth for
  _all_ 6 or 7 devices. You will not get 90MB/sec real throughput with a
  bus bandwidth of 133MB/sec! And the x86 architecture's memory bandwidth
  is _tiny_ (the BX chipset does one or two _dozen_ MB/sec random access,
  i.e. 12-24 MB/sec).
 
 No. BX does 180mbyte/sec (measured).
 K7 with Via KX133 does 262mbyte/sec (measured).
 

Read more carefully. I said _random_ access, not sequential.

 I'd like to get numbers for real alphas. The only alpha I was able to
 measure was Alphastation 200 4/233. A measly 71mbyte/sec on that piece of
 shit.
 

How old is that "shit" and what were the numbers then on x86?

   The alphas we have here have the same number of slots.
  But not only one bus. They typically have 3 slots/bus.
 
 There are multiple pci bus x86 motherboards. Generally found on systems
 with 6 slots. I have seen x86 motherboards with 3 PCI buses, interrupted

I'd like to see how the x86 memory subsystem can saturate three (or only
two) 533MB/sec 64/66 PCI busses and still have the bandwidth to compute
a 90MB/sec stream of data.

 but the most
 I've seen on alpha or sparc is 2.
 
 -Dan

I never denied that such beasts exist. I just wanted to point out that an
x86 machine with those mobos would come close in price to the alpha
solution.
I simply can't imagine that there are no alpha boxen with more than 2
PCI busses. If I had a faster internet connection right now, I'd check
the web site of Alpha Processor Inc.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)





Re: speed and scaling

2000-07-14 Thread Marc Mutz

Seth Vidal wrote:
 
  I'd try an alpha machine, with 66MHz-64bit PCI bus, and interleaved
  memory access, to improve memory bandwidth. It costs around $1
  with 512MB of RAM, see SWT (or STW) or Microway. This cost is
  small compared to the disks.
 The alpha comes with other headaches I'd rather not involve myself with -
 in addition the costs of the disks is trivial - 7 75gig scsi's @$1k each
 is only $7k - and the machine housing the machines also needs to be one
 which will do some of the processing - and all of their code is X86 - so
 I'm hesitant to suggest alphas for this.
 

Look at reality! If you have to do this sort of thing, x86 will give
you headaches. Normal 33MHz x 32bit PCI is _way_ too slow for that
machine. The entire PCI bus cannot saturate _one_ Ultra160 SCSI
controller, let alone a GigEth card. Putting more than one into a box
and trying to use them concurrently will show you what good normal PCI
is for _really_ fast hardware. You surely want multiple 66MHz x 64bit
PCI busses - and _now_ look again at board prices for x86.

Also, if you do data analysis like setiathome does (i.e. mostly FP),
alphas blow away _any_ other microprocessor (a setiathome work-unit in
less than an hour; my AMD K6-2 500 needs 18hrs!). Code can be re-compiled,
and probably should be.

  Another advantage of the alpha is that you have more PCI slots. I'd
  put 3 disks on each card, and use about 4 of them per machine. This
  should be enough to get you 500GB.
 More how - the current boards I'm working with have 6-7 pci slots - no
 ISA's at all.
 

Look, you are on the _very_ wrong track! You may have 6 or 7 PCI
_slots_, but you have only _one_ bus, i.e. only 133MB/sec bandwidth for
_all_ 6 or 7 devices. You will not get 90MB/sec real throughput with a
bus bandwidth of 133MB/sec! And the x86 architecture's memory bandwidth
is _tiny_ (the BX chipset does one or two _dozen_ MB/sec random access,
i.e. 12-24 MB/sec).
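
(A back-of-the-envelope check of where these bus figures come from,
using the standard PCI widths and clocks:

 32 bit x 33 MHz = 4 byte x 33e6/s ~ 133 MB/sec, shared by all devices
 64 bit x 66 MHz = 8 byte x 66e6/s ~ 533 MB/sec, per 64/66 bus)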

 The alphas we have here have the same number of slots.

But not only one bus. They typically have 3 slots/bus.

 
  Might I also suggest a good UPS system? :-) Ah, and a journaling FS...
 
 the ups is a must  -the journaling filesystem is at issue too - In an
 ideal world there will be a Journaling File system that works correctly
 with sw raid :)
 
 -sv

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)




Re: RedHat 6.2 kernel-source-2.2.16-3.i386.rpm

2000-06-27 Thread Marc Mutz

Udo Jocher wrote:
 
snip
 Yes, this latest rpm kernel contains already the
 linux-2.2.16-raid-B2.patch
snip

What's that? 2.2.16-B2? It has not been announced, AFAIK. What's the
difference between it and A0?

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)




raid1: Oops with 2.2.16+raid-2.2.16-A0

2000-06-16 Thread Marc Mutz

Hi Ingo, everyone!

After deciding to make my current / into my future /boot, and to turn the
two 50M partitions I have not used until now into my future / as raid1, I
successfully mkraid'ed it with the following /etc/raidtab (snippet):

Q raiddev /dev/md0
Q     raid-level              1
Q     nr-raid-disks           2
Q     persistent-superblock   1
Q     chunk-size              32k
Q
Q     device                  /dev/sda1
Q     raid-disk               0
Q     device                  /dev/sdc1
Q     raid-disk               1

(chunk-size is not necessary for raid 1, is it?)
I then created an ext2 fs on it:
Q mke2fs -b 4096 -i4096 /dev/md0
I then tried to simulate a disk failure, as I have never done that before
and will not do it with the arrays that contain actual data. So before
putting anything else on /dev/md0, I decided to
Q raidsetfaulty /dev/md0 /dev/sdc1
(ok, /proc/mdstat shows [U_])
Q raidhotremove /dev/md0 /dev/sda1
(maybe ok, I didn't check the log yet; however, no error messages on the
command line)
Q raidhotadd /dev/md0 /dev/sdc1
(no errors on the command line)
(/proc/mdstat shows recovery started, but no progress, ETA increasing all
the time)
(/var/log/messages shows the attached messages, which seem to look OK -
until it oopses, with nothing more in the logs.)


TIA
Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)


Jun 16 20:01:29 adam kernel: trying to hot-add sdc1 to md0 ... 
Jun 16 20:01:29 adam kernel: bindsdc1,2
Jun 16 20:01:29 adam kernel: RAID1 conf printout:
Jun 16 20:01:29 adam kernel:  --- wd:1 rd:2 nd:1
Jun 16 20:01:29 adam kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sda1
Jun 16 20:01:29 adam kernel:  disk 1, s:0, o:0, n:1 rd:1 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 2, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel: RAID1 conf printout:
Jun 16 20:01:29 adam kernel:  --- wd:1 rd:2 nd:2
Jun 16 20:01:29 adam kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sda1
Jun 16 20:01:29 adam kernel:  disk 1, s:0, o:0, n:1 rd:1 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 2, s:1, o:0, n:2 rd:2 us:1 dev:sdc1
Jun 16 20:01:29 adam kernel:  disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel: md: updating md0 RAID superblock on device
Jun 16 20:01:29 adam kernel: sdc1 [events: 0003](write) sdc1's sb offset: 52352
Jun 16 20:01:29 adam kernel: sda1 [events: 0003](write) sda1's sb offset: 52352
Jun 16 20:01:29 adam kernel: .
Jun 16 20:01:29 adam kernel: md: recovery thread got woken up ...
Jun 16 20:01:29 adam kernel: md0: resyncing spare disk sdc1 to replace failed disk
Jun 16 20:01:29 adam kernel: RAID1 conf printout:
Jun 16 20:01:29 adam kernel:  --- wd:1 rd:2 nd:2
Jun 16 20:01:29 adam kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sda1
Jun 16 20:01:29 adam kernel:  disk 1, s:0, o:0, n:1 rd:1 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 2, s:1, o:0, n:2 rd:2 us:1 dev:sdc1
Jun 16 20:01:29 adam kernel:  disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29 adam kernel:  disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jun 16 20:01:29

Re: Patches for 2.2.16?

2000-06-13 Thread Marc Mutz

Theo Van Dinter wrote:
 
snip
 I wasn't sure if there was a 2.2.16 patch coming out soon, and I wasn't sure
 I wanted to install a "A0" patch ...  Any thoughts?
snip

Nothing to worry about: letter-plus-number is just mingo's versioning
system for the patches he maintains. It has nothing to do with alpha
releases or such. If he just ported it to the new kernel w/o changing
anything else, this is the normal version one would expect, no?

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)





Re: Current raid driver for 2.3.42?

2000-04-07 Thread Marc Mutz

Thomas Stegbauer wrote:
 
snip
   oh sorry, when i do a "make menuconfig" there should be autodetect raid
   arrays
   raid1, raid4/5, in blockdevices multipledevices.
  
  Maybe that's not an _option_ anymore.
 
 
 but raid0 and linear is still there ??
 
I meant _autodetect_ is not an option anymore but built-in
functionality.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)




Re: Current raid driver for 2.3.42?

2000-04-05 Thread Marc Mutz

Thomas Stegbauer wrote:
 
   right now i downloaded 2.3.99pre3 but under
   /blockdevices/multipledevices i get only linear and raid0, but now
^
  what should that be? Kernel source? I have linear.c and raid{0,1,5}.c in
  /usr/src/Linux/2/3/99/pre3/drivers/block.
 thanx 4 the answer.
 
 oh sorry, when i do a "make menuconfig" there should be autodetect raid
 arrays
 raid1, raid4/5, in blockdevices multipledevices.
 
Maybe that's not an _option_ anymore.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)




Re: Current raid driver for 2.3.42?

2000-04-04 Thread Marc Mutz

Thomas Stegbauer wrote:
 
 
snip
 right now i downloaded 2.3.99pre3 but under
 /blockdevices/multipledevices i get only linear and raid0, but now
  ^
what should that be? Kernel source? I have linear.c and raid{0,1,5}.c in
/usr/src/Linux/2/3/99/pre3/drivers/block.

 autodetect and raid5 :-(
 
 made i any errors?
 
 greetings
 thomas stegbauer

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)





Re: Software RAID with kernel 2.2.14

2000-03-23 Thread Marc Mutz

"m.allan noah" wrote:
 
snip
 instead, then you are running a patched kernel, and your problem must lie
 elsewhere (try recompiling the raid tools from source)
snip

... and check if you installed the raidtools.rpm and not the
mdutils.rpm.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)





Re: newbie needs help

2000-03-22 Thread Marc Mutz

Wolfram Lassnig wrote:
 
 hello,
 I'm trying to get a RAID5 system running, but I've got problems setting
 up the /dev/md. I'm using SuSE 6.3, Linux version 2.2.13
 ([EMAIL PROTECTED]) (gcc version egcs-2.91.66 19990314/Linux
 (egcs-1.1.2 release)) #1 Mon Nov 8 15:51:29 CET 1999, and
 raidtools v.0.90.2-Alpha, 27th February 1999.
 My problem is that I don't know whether these 2 parts fit together.
 After booting the standard kernel there is a /proc/mdstat.
 When I try :
 root@foo > mkraid --really-force /dev/md0
 DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
 handling MD device /dev/md0
 analyzing super-block
 disk 0: /dev/sda3, 8570677kB, raid superblock at 8570560kB
 disk 1: /dev/sdb3, 8570677kB, raid superblock at 8570560kB
 disk 2: /dev/sdc3, 8570677kB, raid superblock at 8570560kB
 mkraid: aborted, see the syslog and /proc/mdstat for potential clues.
 
 Is it the wrong raidtool version, is it the wrong kernel patch (SuSE does not
 respond on my queries),

SuSE still has old-style raid in its kernel, AFAIK. You have to patch
the kernel yourself and install raidtools.rpm (it looks as though you
have already done that). Keep your hands off mdtools.rpm!!

 is there a problem with my partition configuration (i.e.
 which type has to be set on the partitions being used for RAID (ext2, fd ??))
 
snip

fd is correct, but it's the hexadecimal value of the partition type
byte, not something like ext2 or iso9660.
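
A sketch of an fdisk session that sets that type byte (the partition
number here is hypothetical; 'fd' is "Linux raid autodetect"):

root# fdisk /dev/sda
Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Command (m for help): w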
-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)





Re: Errors when trying to patch kernel

2000-03-06 Thread Marc Mutz

Slip wrote:
 
 Hi there,
 I downloaded the latest raid patch(raid0145-19990824-2.2.11.gz) and am
 having a bit of trouble patching it in Slackware 7.0. As stated in the
 Kernel HOWTO, this is how I attempted to patch:
 'zcat raidx.gz | patch -p0'
 
snip

Two things:

1.) Latest raid patch is not raid0145-19990824-2.2.11, but:
[EMAIL PROTECTED] wrote:
 
 The RAID patch is at: http://people.redhat.com/mingo/raid-2.2.14-B1

2.) You probably were not in /usr/src when you issued the above command.
Use patch this way:

$ cd /usr/src/linux   # or wherever your source resides
$ patch -p1 < /path/to/raid-2.2.14-B1

See the kernel-HOWTO for how to apply patches to the kernel source.
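
If in doubt, GNU patch can do a trial run first that leaves the tree
untouched (a sketch, assuming your patch supports --dry-run):

$ cd /usr/src/linux
$ patch -p1 --dry-run < /path/to/raid-2.2.14-B1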

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: over 1T problem?

2000-01-21 Thread Marc Mutz

TAKAMURA Seishi wrote:
 
snip
 
 (system configuration)
   RedHat 6.1 (Japanese version)
   kernel 2.2.14 + RAID patch(raid0145-19990824-2.2.11)
snip

Have you tried http://www.redhat.com/~mingo/raid-2.2.14-B1? Some people
reported problems with the 2.2.11 patch applied to 2.2.14.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: raid-2.2.14-B1.14 doesn't patch 2.2.14 properly

2000-01-13 Thread Marc Mutz

Scott Thomson wrote:
 
 Am I missing something here?
 The source has just been freshly untarred from linux-2.2.14.tgz
 This is just the the first prompt. It goes on and on...
 
 patching file `linux/init/main.c'
 Hunk #1 FAILED at 19.
 Hunk #2 FAILED at 488.
 Hunk #3 FAILED at 928.
 Hunk #4 FAILED at 1426.
 4 out of 4 hunks FAILED -- saving rejects to linux/init/main.c.rej
 The next patch would create the file `linux/include/linux/raid/linear.h',
 which already exists! Assume -R? [n]
 
 main.c.rej is as follows
snip

1.) What was the current working directory, and what was the exact patch
command you issued?
2.) Did the errors start with the first file to be patched, or was main.c
further down the list of files to be patched?
3.) What is raid-2.2.14-B1.14? I only know of raid-2.2.14-B1, and that
one applied to my 2.2.14 without rejects.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



[thread from lkml] Re: Software RAID Patches for 2.2.14pre's?

2000-01-02 Thread Marc Mutz

"Homme R. Bitter" wrote:
 
snip
  I was able to get the Software RAID patch raid0145-19990824-2.2.11
  working with the 2.2.13 kernel, but couldn't do it with 2.2.14p18.
interrupted
  Also the RAID patches date back to August.  Any updates since then?

No, not that I know of.

 Did you try to patch a plain 2.2.13 source and did a patch to 2.2.14pre18
 after that ?
 I didn't try with pre18, but pre8 worked for me.
 Or did you try to patch the 2.2.14pre18 ?
 Patching a pre instead of the plain 2.2.13 will probably not work.
snip

The sequence of patching is irrelevant, AFAICT.
If the original poster could give more info, such as the exact error
message, then we could help him better.
Please re-direct your replies to [EMAIL PROTECTED]

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: Large files 2GB+ RAID?

1999-12-29 Thread Marc Mutz

Seth Vidal wrote:
 
snip
 
 My understanding is that the bigmem patches are FS patches not memory
 patches - they are inappropriately named perhaps.
 
snip

Bigmem is support for >1G _RAM_. The reason it is in 2.2 while large-file
support is not is that the latter breaks libc (and POSIX?), whereas the
former is a pure kernel issue.

Now back to Jason's question:
I meant that you have two options if you insist on having RAID and large
file support in Linux _right now_:
1.) get 64bit hardware and use 2.2.13+raid0145
2.) get a hw-RAID controller and use 2.3.x, which supports >2G files,
you said.

If you don't want to go with any of that, you have the third option
3.) use another OS

If you want to stay true to Linux, you can invest some months of waiting
to gain access to options
4.) wait for Ingo et al. to make sw-RAID 0.90 stable on 2.3/2.4
5.) wait for >2G file support to be backported to 2.2,
both of which are more a matter of belief than of fact, as others have
pointed out. I personally think that raid-0.90 will make it into 2.4 and
do not think that >2G files will ever become part of 2.2.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: SV: never kernel than 2.2.11

1999-12-25 Thread Marc Mutz

Johan Ekenberg wrote:
 
  No-one swaps to md devices, except raid-1 and that only, if one really,
  really needs it. It's just too slow.
 
 Is it really that slow? The md-devices are faster than a regular disk (I use
 RAID 5).
snip

faster than a single disk: yes.
faster than normal swapping: no.

Normally, if you have the option to swap to md devices, you have more
than one disk. The non-md way of swapping would then be:
root# grep swap /etc/fstab
/dev/sda9  swap  swap  defaults,pri=1  0  0
/dev/sdb9  swap  swap  defaults,pri=1  0  0
/dev/sdc9  swap  swap  defaults,pri=1  0  0

i.e. something like a raid0 device setup. You know that raid0 is the
fastest of all raid levels when the access pattern is something like
50-50 r-w with small blocks. So any other raid level used will slow
things down.

Marc
-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: autostart upgraded Raiddevices (SuSE 6.3)

1999-12-23 Thread Marc Mutz

"Schackel, Fa. Integrata, ZRZ DA" wrote:
 
snip
 Since without the persistent-superblock=1 set, there is no autostart from raidtools.
 
 How can I manage to autostart during bootup?
 Are there also preconfigured scripts in SuSE 6.3?
 If not, can anybody help me with script fragments?
snip


SuSE still employs mdtools-0.42.
Look at /sbin/init.d/boot. Right at the beginning it initializes md
devices. Throw that stuff away and replace it with the corresponding
calls to raidstart. The same is true for /sbin/init.d/halt.
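
A hypothetical fragment of what the replacement could look like
(untested; adapt the device names and check your raidtools' man pages):

# in /sbin/init.d/boot, instead of the mdadd/mdrun calls:
raidstart /dev/md0 /dev/md1
# in /sbin/init.d/halt, correspondingly:
raidstop /dev/md0 /dev/md1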

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/Encryption-HOWTO/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)




Re: adding/removing linear raid drives

1999-11-10 Thread Marc Mutz

Glenn McGrath wrote:
 
 Is it possible to add or remove drives with linear-raid non-destructively?
 I thought I read something about this a while back, but can't track it down.
 
_Try_ this (i.e. don't rely on it!):

root# umount /dev/md0
root# raidstop /dev/md0
...edit /etc/raidtab to include another device at the end of md0...
[root# mkraid /dev/md0] ??? This is the critical point.
...This should work, however, if you do not use persistent superblocks;
the above step should then be unnecessary...
root# mount /dev/md0
...should work...
root# umount /dev/md0
root# ext2resize /dev/md0 <new size in ext2-blocks>
root# mount /dev/md0

I strongly recommend trying this _first_ on really unimportant data
(btw. does raid work on loop devices?), and if it works, you can consider
the critical data _only after a backup_.
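
A sketch of how one might try that experiment on loop devices instead of
real disks - untested, and whether md accepts loop devices is exactly the
open question above:

root# dd if=/dev/zero of=/tmp/d0 bs=1024 count=10240   # two 10M scratch files
root# dd if=/dev/zero of=/tmp/d1 bs=1024 count=10240
root# losetup /dev/loop0 /tmp/d0
root# losetup /dev/loop1 /tmp/d1
...then point /etc/raidtab at /dev/loop0 and /dev/loop1 and mkraid as above...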

Marc

PS: ext2resize is not as critical as it seems. It works very well for me,
extending my encrypted loop device regularly.
http://www.dsv.nl/~buytenh/ext2resize/

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: Patch for kernel 2.2.13?

1999-11-03 Thread Marc Mutz

Luca Berra wrote:
 
snip
  Does anyone know when the raid 0.90 patch for kernel 2.2.13 should 
  be released?
  I've looked at kernel.org but latest there is 2.2.11.
 
This one applies fine. You get some rejects, but you can ignore them.

 
 look in /pub/linux/kernel/alan/2.2.13ac
 
With this one, bear in mind that you are entering the experimental branch
of the stable kernel series.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Does raid0run work with old-style raid kernels?

1999-10-26 Thread Marc Mutz

Hi out there!

I finally want to update my raid0-arrays to the 0.90 style raid. The
first thing I did was compiling raid-tools-0.90. That went fine.
I then converted my /etc/mdtab into a /etc/raidtab (please check them
for errors, they are attached).
Next step would be editing the init scripts to use raid0run instead of
mdadd -r. I tested this with one of my md's (namely /dev/md2), but I got
an error along the lines of the following:
Q ...
Q mkraid: aborted.
I wonder what mkraid has to do with raid0run'ning old md's on an
old-style kernel? Is raid0run only for use with new-style kernels? If
so, how the heck do I test the raidtab before changing the init scripts
and booting into the new-style kernel[1] with no return?

Marc


[1] I could, of course do something along the lines of
# if [ "$(uname -r)" = "2.2.13" ]; then
#     new-style-tools
# else
#     old-style-tools
# fi
assuming that I only patch 2.2.13 for the time being. But that would
still only resolve the init-script issue. What about typos in
/etc/raidtab? This is too risky for my taste.

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Does raid0run work ... missing attachments (sorry)

1999-10-26 Thread Marc Mutz

(This is so you can check whether they are equivalent. TIA.)

# mdtab entry for /dev/md5
/dev/md5    raid0,128k,0,008d4eba   /dev/sda10 /dev/sdb10
# mdtab entry for /dev/md1
/dev/md1    raid0,32k,0,7256556d    /dev/sda5 /dev/sdb5 /dev/sdc5
# mdtab entry for /dev/md2
/dev/md2    raid0,32k,0,4990be6f    /dev/sda6 /dev/sdb6 /dev/sdc6
# mdtab entry for /dev/md4
/dev/md4    raid0,8k,0,222e8667     /dev/sda8 /dev/sdb8 /dev/sdc8
# mdtab entry for /dev/md3
/dev/md3    raid0,8k,0,cd586fbc     /dev/sda7 /dev/sdb7 /dev/sdc7


raiddev /dev/md1
raid-level  0
nr-raid-disks   3
persistent-superblock   0
chunk-size  32k

device  /dev/sda5
raid-disk   0
device  /dev/sdb5
raid-disk   1
device  /dev/sdc5
raid-disk   2

raiddev /dev/md2
raid-level  0
nr-raid-disks   3
persistent-superblock   0
chunk-size  32k

device  /dev/sda6
raid-disk   0
device  /dev/sdb6
raid-disk   1
device  /dev/sdc6
raid-disk   2

raiddev /dev/md3
raid-level  0
nr-raid-disks   3
persistent-superblock   0
chunk-size  8k

device  /dev/sda7
raid-disk   0
device  /dev/sdb7
raid-disk   1
device  /dev/sdc7
raid-disk   2

raiddev /dev/md4
raid-level  0
nr-raid-disks   3
persistent-superblock   0
chunk-size  8k

device  /dev/sda8
raid-disk   0
device  /dev/sdb8
raid-disk   1
device  /dev/sdc8
raid-disk   2

raiddev /dev/md5
raid-level  0
nr-raid-disks   2
persistent-superblock   0
chunk-size  128k

device  /dev/sda10
raid-disk   0
device  /dev/sdb10
raid-disk   1



Re: How many Inodes?

1999-10-26 Thread Marc Mutz

[EMAIL PROTECTED] wrote:
 
snip
 
 With all those tiny files and a huge amount of disk space, be aware that
 the latest version of mke2fs seems to "decide on a block size" based on
 partition size rather than default to 1kb blocks.  I noticed this when Red
 Hat 6.1 gave me a 2.5gb /var partition with 4kb blocks.
 
snip

This is of course desirable. 4k is the page size on x86, which makes the
unified cache in 2.3 (2.4) much faster on such fss. Just yesterday I
converted all my fss to 4k block size; /var grew from 84M to 91M.

Also, in 2.2.x, 4k fss are significantly faster when it comes to deleting
(or truncating) large files or big directories, because you need fewer
blocks for a given file size and therefore raise the threshold at which
{,double,triple?} indirect blocks come into play.
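
For reference, forcing the block size instead of relying on mke2fs'
heuristic (a sketch; the device name is hypothetical, and dumpe2fs is
just one way to verify):

root# mke2fs -b 4096 /dev/hda5
root# dumpe2fs -h /dev/hda5 | grep 'Block size'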

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)




Re: How many Inodes?

1999-10-26 Thread Marc Mutz

Kent Nilsen wrote:
 
snip
 mke2fs -b 4096 -m 5 -i 8192 -R stride=128
snip                            ^^^^^^^^^^ = 128 x 4k blocks = 512k

You don't really have a 0.5MB (512k) chunk size, do you??

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: increasing the number of raid and scsi devices

1999-10-14 Thread Marc Mutz

Phil Macias wrote:
 
 Hello, all.
 
 I have been using Linux RAID for some time now with the AHA 2940 U2W
 and SUN Diskpacks. I am at the point where four md devices and sixteen
 SCSI disk devices are not enough.
 
The limit of four md devices is easily lifted by changing MAX_MD_DEV in
md.h. The limit of eight real devices making up one md device can be
increased to 12 by changing MAX_REAL in md.h. Mingo said something about
making a patch that could increase this number to 23 or so; see the ml's
archive.
SCSI disks 16 through 31 have major 65; see Documentation/devices.txt.
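
A sketch of where to look before rebuilding (the values shown are made-up
examples - the exact macro names and limits vary between raid patch
versions, so check your md.h first):

$ grep -rn 'MAX_MD_DEV\|MAX_REAL' /usr/src/linux/include/linux/
...raise the two #defines, e.g. MAX_MD_DEV 4 -> 16, MAX_REAL 8 -> 12...
$ cd /usr/src/linux && make dep && make bzImage   # rebuild after editing a kernel header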

snip

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)




Re: networked RAID-1

1999-10-05 Thread Marc Mutz

Tom Kunz wrote:
 
snip
 Linux-based software alternative to the super-expensive external RAID
 towers that have multiple independent SCSI buses.  They run for $10k
 each, and you can connect multiple machines into them, which will all
 mount the array simultaneously.  Any node can go down at any time,
 regardless of any cron schedule, and no data will be lost.
snip

That will do _nothing_ for you, because:

1.) you can only mount it r/w on exactly one machine.
2.) even if 1) is ok for you, you cannot even mount the array ro on the
other machines, because of Linux' disk caching.

Maybe raid1 over NBD (see linux/Documentation/nbd.txt) is what you want,
but I don't know if that works.

Basically, what I think of is the following:
- Machine A is master and has one half of the disk, raid1'ed with the
nbd'ed other half from Machine B, which is the slave.
- If Machine A fails, B detects this somehow, restarts its half of the
raid1 disk in degraded mode, mounts it r/w and takes the place of A.
  If A then comes up again, it treats its half of the array as faulty and
reconstructs it from the nbd'ed half of B.
- If Machine B goes down, Machine A's raid1 falls to degraded mode
until B comes up again, at which point B's nbd half is reconstructed
from A's half.

If this works, you can also add a third machine and make a threefold
raid1 for added HA. I'm curious myself whether this would work;
unfortunately I cannot test it myself.
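
A purely hypothetical raidtab sketch for the master, assuming the nbd
client exposes B's half as /dev/nd0 (untested, as said):

raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1

    device                  /dev/sda1    # local half on machine A
    raid-disk               0
    device                  /dev/nd0     # machine B's half via nbd
    raid-disk               1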

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Problem unmounting...

1999-09-23 Thread Marc Mutz

Hi out there!

This problem is with a raid'ed device, so I feel this is the right place
to ask... It goes like this:

I just wanted to add a third hd to my two-way raid array. I did the
following (I don't remember the exact wording of the messages, so I give
only glipses of them):

# init S
... runlevel S reached
# umount /usr
...device busy
# fuser -k /usr
no processes ...
... do umount /usr
# umount /usr
... device busy

I wanted to copy /usr to a spare partition, then mkraid the new
three-part md-/usr, then copy the data back.
I use SuSE 6.0/6.1 (partly updated), stock raid of 2.2.12 kernel.
I'm not using the new raid, as the old one works very well for raid0 and
linear modes, which I use.

Any ideas why I cannot umount /usr, while /var, /opt, /tmp and /home can
be umounted just fine?
This has struck me before. IIRC, the problem does not come up when I
boot into runlevel 3 (the default) and then immediately switch to
runlevel S.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: Reliable SCSI LVD controler for Linux ?

1999-09-09 Thread Marc Mutz

"Stanley, Jeremy" wrote:
 
 I apologize in advance if this is a WAY off-topic tangent, but...
 LVD-HVD bridge?  What controllers have this feature?  Or at least what
 controllers support an HVD channel and an LVD channel simultaneously?
 I'm using an HVD-capable NCR (Symbios Logic) at present and assumed I'd
 have to free up a PCI slot or get a motherboard with more than
 2ISA/5PCI/1AGP which are hard to come by cheaply.  My SW array is HVD
 and the DDS3 I'm trying to install is most definitely NOT.  Any
 suggestions?

You probably confused LVD and HVD. LVD is newer, more expensive, and is
commonly called U2-SCSI. HVD is the 'old' way of doing things.
Also, my reply was not correct insofar as I didn't really mean HVD, but
normal (i.e. non-differential) instead.
As for your question: I don't know. Maybe Adaptec has them -
more-than-one-channel adapters too, which would let you separate LVD
devices from non-LVD devices via the two channels. Separate bridges are
rather expensive, around $150 I think, so don't go for them unless one is
integrated on the adapter itself.
I have always used Symbios Logic SCSI adapters and have never had any
problems. They are also very good performance-wise, judging from the
last c't magazine test. They are much cheaper than Adaptec, so you can
safely buy two separate controllers if you have PCI slots left.
Adaptec controllers are also said to have the most problems with Linux,
although that might just be because they are the most widely-used ones,
too.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: Reliable SCSI LVD controler for Linux ?

1999-09-07 Thread Marc Mutz

Hi Hubert!

I use a Symbios Logic U2W controller for my swraid, upgraded from a
Symbios Logic U-SCSI controller, and what shall I say: plugged it in, it
worked.
They have only one channel, and no lvd-to-hvd bridge, though.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)




Re: RAID-0 implementation

1999-08-26 Thread Marc Mutz

You can use raid-linear and do the following, if linear mode does not use
superblocks. If it does, or if you want to raid-0 the thing, then nothing
saves you from backing up all your data, configuring the raid stuff and
putting the backed-up data back. But otherwise (sketched in commands
below):

1.) concatenate the filesystems, old fs first, to e.g. /dev/md0
2.) try to mount /dev/md0. should succeed.
3.) unmount it
4.) ext2resize it
5.) mount it again

Note: you had better have a good UPS before trying ext2resize!!
You'll find ext2resize at: http://www.dsv.nl/~buytenh/ext2resize/
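
The same steps as commands - a sketch only (mkraid here assumes a linear
/etc/raidtab with the old fs as raid-disk 0; the size argument is in
ext2 blocks):

root# mkraid /dev/md0                                  # 1.)
root# mount /dev/md0 /mnt && umount /mnt               # 2.) + 3.)
root# ext2resize /dev/md0 <new size in ext2-blocks>    # 4.)
root# mount /dev/md0 /mnt                              # 5.)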

Marc
-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)




Re: updating arrays from 0.42 to 0.90

1999-08-21 Thread Marc Mutz

Alvin Oga wrote:
snip
 when I tried out our old raid drives with linux-2.2.10...it wouldn't
 recognize it so I compiled raid-0.42 under 2.2.10 and still
 using the old drives as is... ( for redhat-60 distro )
 - didn't want to lose /home directory on the drives ...
snip
I'm not talking about 2.2.1{0,1}. I'm already happily using them, as well
as 2.2.12-forlinus (which has old-style raid).

I'm concerned with 2.2.12-{pre*,final*}, as they include _new-style_
raid.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: Booting Root RAID 1 Directly _Is_ Possible

1999-08-21 Thread Marc Mutz

Andy Poling wrote:
 
snip
 
 I'm willing to put together a cookbook description, of sorts, to patch and set
 up GRUB to boot RAID 1, and post it to the list.  I guess my question is
 whether there's any interest in such a thing.  It's entirely possible that
 most people are smarter than me, and won't have as much trouble figuring out
 how GRUB works.  :-)
 
snip
Write a GRUB+RAID-mini-HOWTO! That's what they are for...

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: AW: raidstop; raidstart fails

1999-08-20 Thread Marc Mutz

"Schackel, Fa. Integrata, ZRZ DA" wrote:
 
 Hi,
 
 after I applied the raid0145-19990421-2.2.6 patch to my
 new downloaded kernel(2.2.10 from kernel.org) I wanted
 to raise the max devices of a md in md.h.
 But md.h was 0 bytes.
 I don't think it's ok ?!?
 
include/linux/md.h has moved to include/linux/raid/md.h. You can remove
the stale file.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)




updating arrays from 0.42 to 0.90

1999-08-20 Thread Marc Mutz

Hi out there!

To all of you that shout for raid-0.90 into kernel 2.2.12:

Would you please tell us how to upgrade old existing raid0/linear
devices (smoothly, if possible)?
- The HOWTO that comes with the raidtools package and the man pages do
not say a word about that,
- Several posts of others asking exactly the same question were left
unanswered or received only incompetent answers (I looked at the last 2
months),
- The proposed FAQ still seems to be missing.

Marc

PS: Don't flame me, _tell_ me...

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: updating arrays from 0.42 to 0.90

1999-08-20 Thread Marc Mutz

Fred Reimer wrote:
 
 Well you're wrong at least about the man pages.  In mkraid it explains
 the options --upgrade and --force. interrupted
Well, "explains" is putting it a bit strongly, but yes, it is there.

 I think you need both in order to
 upgrade existing RAID arrays.  Presumably you would
 create your /etc/raidtab to match the configuration of
 your existing RAID partitions, run mkraid --upgrade --force, and
 everything would magically be upgraded.  Unfortunately I can't test 
 this for you right now...
 
 It's my impression that it would be preferred, if at all possible, to
 recreate your array from scratch and restore the data from another
 partition or tape or whatever you have available.  I don't know if the
 persistent superblocks would get setup during an upgrade, or if you can
 change stuff like the chunksize or not.  Depending on how you have your
 old array setup it may be preferable to recreate them...
 
OK, so proposed way to go is:
1.) install raidtools-0.90 (done)
2.) d/l, compile, install 2.2.12-final (or patch a kernel) (done)
At this point I have both old and new raidtools available. But how
should I proceed?
3a) with old kernel: init S and umount all mdx (if possible), mdstop
them and mkraid --upgrade (--force). (does this work on old kernel?)
4a) reboot to new kernel, then re-create all with
persistent-superblock=1?
-or-
3b) boot into new kernel - fsck complains about zero-length partitions
(or similar) - sulogin - mkraid --upgrade (--force) - exit - done?
4b) re-create all md's (I will have to anyway, because I just ordered
another drive that I would like to add to the array)

 I hope you don't consider this a flame and somewhat useful...
 
snip
No, thanks for the quick answer :-)

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: raidhotadd

1999-08-13 Thread Marc Mutz

Andreas Gietl wrote:
 
snip
 
 
 # Das Handbuch sagt, das Programm benötige #
 #  Windows 95 oder besser. Also habe ich   #
 #  Linux installiert!  #
 

translation
The manual said, the program needs Windows 95 or better.
So I installed Linux!
/translation

Marc :-)

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: badblocks crashes RH60!

1999-07-31 Thread Marc Mutz

Update your version of badblocks or re-compile it against the new headers.
If that does not help, notify the author.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: Problem with md - md.h

1999-07-31 Thread Marc Mutz

Hi Sonia!

Sonia de Diego Atance wrote:
 
snip
 
 mdadd /dev/hdc /dev/hdd
 
should that not be 'mdadd /dev/mdx' with $x\in\{0,1,2,\ldots\}$ (you
know TeX, don't you)? Also, with raidtools-0.41 you _don't_ need the
raid patches for linear and raid0. I know that, 'cause I have four md
devices running with my 2.2.10-int-4.

 /dev/hdc: No such device
 /dev/hdd: No such device
 
 I have tried it with partitions but the output is the same. I do not what I
 can do!!!. Furthermore, I can not compile new kernel, I have a lot of
 errors and warnings in file init/main.c. Is it possible that the patch is
 not well installed?.
 
Did you remove the 'linux' symlink prior to extracting the tarball?
Did you 'make dep' after 'make {,x,menu}config'?
These two are the most common mistakes that lead to compile errors.
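
For reference, a sketch of the usual safe sequence (kernel version
hypothetical):

$ cd /usr/src
$ rm linux                       # remove the old symlink first
$ tar xzf linux-2.2.10.tar.gz    # unpacks into a fresh linux/
$ cd linux
$ make menuconfig                # or config / xconfig
$ make dep clean bzImage         # 'make dep' must follow *config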

 Thanks a lot.
 
First, see if this helps you :-)


-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: Suggestions for running RAID5 (3 disks): buy 2 extra controllers??

1999-07-27 Thread Marc Mutz

Kiyan Azarbar wrote:
 
 I would like to run RAID5 (3 disks in the array, maybe some room in the future
 for one spare disk). Right now I've got my linux root partition (well, pretty
 much everything is under /, it's meant to be a server) on /dev/hda (4 gig
 Quantum CR). I have 3 12 gig Quantum EX's: /dev/hdb, hdc, and hdd. I can set
 up RAID5 like this but in the HOWTO (and common sense) it states that running
 2 disks on the same IDE channel is not a good idea. Slave mode is supposed to
 slow things (although if there aren't parallel reads or writes on one channel,
 what's the big deal?) interrupted
You _always_ read in parallel when using raid5. That's what makes it
fast.

 continued and also if a master drive goes down there's supposed to
 be a chance it'll bring down the whole channel, failing TWO disks which is an
 unrecoverable error.
 
 So my question is, should I get the Promise Ultra66? 

You should have bought SCSI disks. They might even have been cheaper,
too, because you need only one controller for three disks. (Sorry - could
not resist :-)

snipped rest

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)




Re: Something about RAID 1

1999-07-26 Thread Marc Mutz

Hi Roberto!

That's a FAQ, and there's a HOWTO available: the Root-RAID-(mini?)-HOWTO,
although it might be a little outdated. Also read the appropriate files
in /usr/src/linux/Documentation.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: Swap on Raid ???

1999-07-14 Thread Marc Mutz

Roeland M.J. Meyer wrote:
 
snip
  RAID-1 is faster? since when? RAID-5 should be faster at reads. I get
  ~25MB/s sustained read across 4 U/W disks, 16MB/s sustained write
  according to bonnie. (i've never tried RAID-1 to be honest).
 
 I think that he's talking about RAID10. Take two RAID1 devices and bond
 them with RAID0.
 
snip
You don't want to use RAID0 for swap. Making separate swaps and giving
them the same priority in fstab does the same thing w/o the penalty of an
additional layer in the path.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: Swap on Raid ???

1999-07-13 Thread Marc Mutz

Why does anybody want to use swap-on-RAID with any RAID level other than
1? Wouldn't it be much faster to use multiple swap spaces?

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)




Re: FAQ

1999-07-09 Thread Marc Mutz

Bruno Prior wrote:
 
 It strikes me that this list desperately needs a FAQ. I'm off on holiday for the
 next two weeks, but unless someone else wants to volunteer, I'm willing to put
 one together when I get back. If people would like me to do this, I would
 welcome suggestions for questions to go in the FAQ.
 
Whoever volunteers: The first answer should summarize which version of
{md,raid}tools works with which kernel patched with{,out} patch XY.
I can't think of a question for that, though.

IMO it is very necessary to clear the fog that has settled over
raid-with-linux in the last few weeks.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



Re: raid on raid

1999-07-02 Thread Marc Mutz

Lawrence Dickson wrote:
 
A second question - I seem to remember a limit of 12 disks on any Linux
 raid. Is this for real? If so is there a way around it?
Check md.h. There is a parameter MAX_REAL_DEV (or so) that you can
alter.

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS), 0x31748570 (DH)




Re: RAID-0 Slowness

1999-06-30 Thread Marc Mutz

D. Lance Robinson wrote:
 
 Try bumping your chunk-size up. I usually use 64. When this number is low,
 you cause more scsi requests to be performed than needed. If really big
 ( >=256 ) RAID 0 won't help much.
 
What if the chunk size matches ext2fs's group size (i.e. 8M)? This would
give very good read/write performance with moderately large files (i.e.
<8M) if multiple processes access the fs, because ext2fs usually
tries to store a file completely within one block group. The performance
gain would be n-fold, if n is the number of disks in the raid0 array
and the number of processes is higher than that.
It would give only single-speed (so to speak) for any given application,
though.
But then: wouldn't linear append be essentially the same, given that
ext2fs spreads files all across the block groups from the beginning?

Would that not be the perfect setup for a web server's documents volume,
with MinServers==n? The files are usually small, and there are usually
many more than n servers running simultaneously.

Is this analysis correct, or does it contain flaws?
What would be the difference between raid0 with 8M chunks and linear
append?

Just my thoughts wandering off...

Marc




Re: RAID-0 Slowness

1999-06-29 Thread Marc Mutz

Richard Schroeder wrote:
 
 Help,
 I have set up RAID-0 on my Linux Redhat 6.0.  I am using RAID-0
 (striping) with two IDE disks (each disk on it's own IDE controller).
 No problems in getting it running.  However, my tests show I/O
 performance seems to be worse than on a "normal" non-RAID filesystem.  I
 have tried different chunk-sizes to no avail.  I must be missing
 something.  Shouldn't I be seeing a slight performance gain?
 
Have you enabled (U)DMA? If not, I guess the CPU load imposed by reading
simultaneously is so high that performance is in fact lost. Just
thinking aloud - I never had, nor ever will be, troubling myself with
that IDE crap; I always use SCSI :-)
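
A quick way to check and enable DMA, and to measure the effect (a sketch;
hdparm flags as of that era, device names hypothetical):

root# hdparm /dev/hda       # shows current settings, incl. using_dma
root# hdparm -d1 /dev/hda   # turn DMA on; repeat for /dev/hdc
root# hdparm -t /dev/md0    # rough sequential read benchmark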

Marc




Re: Upgrading RAID

1999-01-03 Thread Marc Mutz

Sean Roe wrote:
 
 Is there a procedure for adding more drives to a RAID system and increasing
 the size of the partitions?  We have mylex Accellaraid 250's (sp?) driving
 the RAID.  I am a little lost as to how to do it.  I mean when and if the
 Mysql server ever breaks 10-12 gig of data I would like to have an easy way
 out.
 
There's a raidreconf utility (see the thread from some days ago) and the
ext2resizer (search freshmeat).

Marc

-- 
Marc Mutz [EMAIL PROTECTED]http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)