> -----Original Message-----
> From: Mike Castle [mailto:[EMAIL PROTECTED]]
> 
> Man, that sounds like a bad personals ad.
> 
> Anyway, just getting into RAID.  Non-critical home system that I just
> want to play around on.  It's an older 233MHz based system running IDE.
> In other words, I'm not too concerned about performance.  Though if I
> could arrange things to not hurt performance, and perhaps even increase
> things, that would be great.

Heh, that's a lot faster than my primary RAID testing system.  I run a P166
with 128MB of RAM and an Adaptec AHA-2920 (with BIOS) to test.  It's not
fast, but it's rock solid stable.

> Running 2.2.17 with software RAID patches.

Are you using the new RAID tools?
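If you're not sure, /proc/mdstat is the quick check; with the 0.90 patches
applied it lists the compiled-in personalities and active arrays, something
like this (a sketch from memory, your devices and arrays will differ):

    $ cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid5]
    read_ahead 1024 sectors
    md0 : active raid1 hdc1[1] hda1[0] 128384 blocks [2/2] [UU]
    unused devices: <none>

The old md utilities and the 0.90 raidtools aren't interchangeable, so it
matters which you have.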

> Running 2 built-in IDE controllers (Intel 430VX chipset, not that great,
> but free), and an old AWE32 as a 3rd controller.  As I said, I'm not in
> it for the speed.  On the other hand, I have these as my hard drives:
> 
> hda: Maxtor 91303D6, 12427MB w/512kB Cache, CHS=12624/32/63, (U)DMA
> hdb: Maxtor 53073U6, 29311MB w/2048kB Cache, CHS=59554/16/63, (U)DMA
> hdc: Maxtor 93652U8, 34837MB w/2048kB Cache, CHS=4441/255/63, (U)DMA
> hdd: Maxtor 92048U8, 19470MB w/2048kB Cache, CHS=39560/16/63, (U)DMA
> hdh: Maxtor 53073H6, 29311MB w/2048kB Cache, CHS=59554/16/63

UDMA drives on PIO mode 3 controllers?  Is that last drive really not UDMA?
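hdparm will tell you what the drive and kernel actually negotiated.  These
are stock hdparm invocations; substitute your own devices:

    hdparm -i /dev/hdh     # drive identify info; check the DMA/UDMA mode lines
    hdparm -d /dev/hdh     # is using_dma currently on?
    hdparm -d1 /dev/hdh    # try enabling DMA, if the controller can do it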

> Yeah.  About 120G or so.

That's a fair chunk of disk...

> Anyway, I want to partition them into something like this:
> 
> /          128M
> /var        64M
> /tmp       128M
> /usr/tmp   128M
> /var/tmp   128M   (yeah, figured might as well keep all three separate)
> /home       10G
> /usr        10G
> /var/spool/news 25G (leafnode mirror, don'tcha know)
> /usr/src    15G
> /usr/mirror 40G
> /usr/music  13G

Looks like 11 slices, not including swap.

> Plus probably 64M of swap on each physical drive.

So you want 320MB of swap total?  You should probably just put your three
fastest drives as masters, and have 1 swap partition on each.
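One cheap trick: give every swap partition the same priority in /etc/fstab
and the kernel will stripe swap across the drives RAID-0 style, with no md
device needed.  A sketch, with made-up partition numbers:

    /dev/hda2    none    swap    sw,pri=1    0    0
    /dev/hdc2    none    swap    sw,pri=1    0    0
    /dev/hde2    none    swap    sw,pri=1    0    0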

> Numbers don't quite add up.  Probably extra to mirror.

Mirroring cuts your usable space in half, obviously.

> Anyway, obviously I can't follow the recommended procedure of no slave
> drives.  So I'm going to face IDE contention.  Probably best I can do
> is reduce head movements.

Avoiding slaves is not only for speed, but also for reliability.  Often when
a master drive is lost, the entire IDE channel goes down.

> I'm considering keeping /, /var, and {/usr,/var,}/tmp as non-RAID.
> Probably one per disk just to spread things out.  Then taking the rest
> of the disk space and using it with RAID.

O.K.

> The biggest question is, would it make more sense to spread everything
> out across all 5 drives, or to go no more than 2-3 drives for a
> particular partition?  If no more than 2-3 drives, would it make sense
> to put, say, both as masters, or have 1 master, 1 slave for a
> particular md partition?

Go with a max of 3 drives, 1 partition on each, for each md device.  I'd put
non-critical stuff onto your slave drives (which I would also make the
slowest drives).  That will give you the best reliability and speed, and
since this system isn't all that fast to begin with, maximizing speed is a
good thing.
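A raidtab entry for one of those three-drive md devices would look roughly
like this (device names, partition numbers, and chunk size are placeholders,
not a recommendation):

    raiddev /dev/md0
        raid-level              0
        nr-raid-disks           3
        persistent-superblock   1
        chunk-size              32
        device                  /dev/hda5
        raid-disk               0
        device                  /dev/hdc5
        raid-disk               1
        device                  /dev/hde5
        raid-disk               2

Then mkraid /dev/md0 and mke2fs it as usual.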

> One thing I was considering was just dividing the disk into arbitrary
> chunks of 1-2G.  Then just piecing them together.  Of course, partitions
> on the same drive would be linear, then across drives would be raid-0.
> I was

I'm not sure how the RAID code handles that.  I can't think of any reason
that I'd want to chop things up like that, especially considering how much
space you've got available.  At 2GB chunks, that's about 60 chunks.  Yikes.
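If you do go down that road, a plain linear array gluing two chunks on the
same drive together is simple enough in raidtab (partitions invented for the
example; note the tools want a chunk-size line even though linear mode
ignores it):

    raiddev /dev/md1
        raid-level              linear
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              32
        device                  /dev/hdb5
        raid-disk               0
        device                  /dev/hdb6
        raid-disk               1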

> considering doing it like this so that, if necessary, I could reassign
> slabs and move things around as necessary (after doing things like
> resize2fs and mkraid).  Would this be feasible (running out of md
> devices)?

I don't know.  Linux uses the traditional DOS partition model by default,
which gives you four primary slices per drive, or three primaries plus an
extended partition full of logicals (the IDE driver will go up to 63
partitions per drive, but that gets unwieldy fast).  I generally tend to
keep as few different slices on a disk as I possibly can.  My single-disk
or mirrored machines have three slices per drive: /boot, swap, and /, in
that order.  Machines with more hard disks get a bit more complex, and it
really depends on the application.
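For what it's worth, that 3-slice layout looks like this (sizes are whatever
suits you; /boot just needs to be small and near the front of the disk to
keep old BIOSes happy):

    /dev/hda1    /boot    ext2    small, holds kernels
    /dev/hda2    none     swap
    /dev/hda3    /        ext2    the rest of the disk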

> Hurt performance (going through both raid0 and linear)?  Too much like
> micro-managing and it's just not worth it?

I don't think you can layer RAID levels using the 2.2.x kernels.  The only
combination that will even work is RAID 1 with RAID 0, and that has a fatal
bug: the failure of a disk goes unrecognized, which defeats the purpose of
RAID 1.  If you haven't already done so, you should read the Multi-Disk
System Tuning HOWTO.  It's available from http://www.LinuxDoc.org/ and its
mirrors.  HTH,
        Greg