Re: [gentoo-user] OT: System redesign

2003-11-07 Thread MAL
The Plan:

Disks:
--
Quantum 8GB
IBM 42GB (previous root, probably dead - not gonna use)
WD1 120GB
WD2 120GB
CD-RW
DVD-ROM
Connections:

CD-RW    IDE1 Master (hda)
DVD-ROM  IDE1 Slave  (hdb)
Quantum  IDE2 Master (hdc)
WD1      IDE3 Master (hde)  \_ Promise controller
WD2      IDE4 Master (hdg)  /
Partitions:
---
hdc:
8GB - NTFS (0x07)
hde:
64MB - Linux (0x83)
8GB - NTFS (0x07)
1GB - Linux Swap (0x82)
Remainder - Linux RAID autodetect (0xfd)
hdg:
64MB - Linux (0x83)
8GB - NTFS (0x07)
1GB - Linux Swap (0x82)
Remainder - Linux RAID autodetect (0xfd)
Windows:

hdc1:
c:\ - 8GB
hde2 + hdg2:
d:\ (software RAID0) - 16GB (fast for games)
Linux:
--
hde1:
ext2 - /boot1
hdg1:
ext2 - /boot2 (not really needed)
hde3 + hdg3:
striped swap
hde4 + hdg4:
RAID0, chunk-size 32
LVM
/            - ext3
/usr         - ext3
/usr/portage - ext2
/var         - ext3
/home        - ext3
/mnt/storage - ext3
This will require an initrd for both RAID and LVM, but I'm prepared to
maintain that.
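
For my own notes, the build would go roughly like this (mdadm/LVM commands;
the volume group name and LV sizes are only placeholders, and on a 2.4
kernel I may end up using raidtools and LVM1 userspace instead):

  # RAID-0 across the two 0xfd partitions, 32k chunks as above
  mdadm --create /dev/md0 --level=0 --chunk=32 \
        --raid-devices=2 /dev/hde4 /dev/hdg4

  # LVM on top of the array
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 4G  -n root    vg0
  lvcreate -L 6G  -n usr     vg0
  lvcreate -L 5G  -n portage vg0
  lvcreate -L 2G  -n var     vg0
  lvcreate -L 10G -n home    vg0
  # remainder stays free for /mnt/storage and later resizing

  # ext3 everywhere except /usr/portage (plain ext2)
  mke2fs -j /dev/vg0/root
  mke2fs -j /dev/vg0/usr
  mke2fs    /dev/vg0/portage
  mke2fs -j /dev/vg0/var
  mke2fs -j /dev/vg0/home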

I'm only using Ext2/3, as XFS just won't mix with the kernels I want to
use, and ReiserFS isn't really appropriate in this setup, is it?  Can
anyone see where JFS would be a better choice than Ext3?

EVMS2/LVM2 seem too hard to maintain on a 2.4-series kernel with Win4Lin
and other patches present.

Any other suggestions are welcome,
Cheers,
MAL


Re: [gentoo-user] OT: System redesign

2003-11-07 Thread Spider
begin  quote
On Fri, 07 Nov 2003 13:18:16 +
MAL [EMAIL PROTECTED] wrote:

This looks quite good.

 This will require an initrd for both RAID and LVM, but I'm prepared to
 maintain that.


 
 I'm only using Ext2/3, as XFS just won't mix with the kernels I want
 to use, and ReiserFS isn't really appropriate in this setup, is it?
 Can anyone see where JFS would be a better choice than Ext3?

Hmm, it depends really.  In my tests JFS was really good over time, but
ext2 and 3 are far better known and therefore get much more attention.
It's a trade-off.  My only advice is to try it and see :) (reduce the
store and keep a test partition, a second /usr, on one of them for a
while and see?)




 EVMS2/LVM2 seem too hard to maintain in a 2.4 series with Win4Lin and 
 others present.

No real experience there, sorry.


//Spider
-- 
begin  .signature
This is a .signature virus! Please copy me into your .signature!
See Microsoft KB Article Q265230 for more information.
end




[gentoo-user] OT: System redesign

2003-11-05 Thread MAL
My setup as it stands:

On-board IDE controller:
hda: 8GB HDD
     c:\ (WinXP)
hdb: none
hdc: CD-RW
hdd: DVD-RW
On-board Promise IDE ('raid') controller:
hde: 42GB HDD
     /boot (Gentoo)
     / (Gentoo)
hdf: none
hdg: none
hdh: none
The machine is an Athlon XP 1700+ with 512MB RAM, and basically I'm tired of
the limited performance.  My next upgrade step will be a dual-CPU motherboard
(Athlon-MP, AMD-64, who knows), but for now I decided the disk was the
bottleneck.
I'm on a pretty limited budget, so after deciding serial-ATA was too pricey
for the performance gain (I'd need a controller card), I decided on buying
2x Western Digital JB drives (120GB, 8MB cache), as I've already used these
drives and can vouch for their performance.

The idea is to RAID-0 them, LVM/EVMS that, and partition on top.
I can wipe and reuse the 42GB and 8GB disks and move them around, but I need 
WinXP (preferably with more space than 8GB - blame the games :)

I'm really just asking if anyone has any recommendations for partition 
layout, what filesystems on what partitions, and what general structure 
would be fastest?

Would swap benefit from being on another disk?

Does fragmentation play a part over time?

What block size for the RAID-0 array?

Cheers for any help!
MAL


Re: [gentoo-user] OT: System redesign

2003-11-05 Thread William Kenworthy
I would add to Spider's comments that you can create two same-sized swap
partitions, one on each disk, and mount them at the same priority in
fstab.  The kernel can then access the swaps in a similar fashion to
raid0.  Also, in this day and age of cheap disk space, go overboard with
space if you have ever even come close to filling swap up (I use two 1G
partitions, and dearly wish I had set them to 2G!).  Too hard to fix
afterwards!
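
In fstab that looks something like this (device names are just an example,
taken from the layout being discussed):

  /dev/hde3   none   swap   sw,pri=1   0 0
  /dev/hdg3   none   swap   sw,pri=1   0 0

With equal pri= values the kernel stripes pages across both swap areas,
which is what gives the raid0-like effect.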

BillK

On Thu, 2003-11-06 at 02:45, Spider wrote:
 begin  quote
 On Wed, 05 Nov 2003 15:18:32 +
 MAL [EMAIL PROTECTED] wrote:
 






RE: [gentoo-user] OT: System redesign

2003-11-05 Thread Jeffrey Smelser
This depends, and correct me if I am wrong: when you use SCSI, you can use
two different hard drives and get a performance benefit.  The last I
remember, if you're using IDE, this is not the case unless you have them on
two different IDE cards.  Even though an IDE card can handle two hard
drives, it can only read from them one at a time, hence the performance
wouldn't be there...

just a side note.

 I would add to Spider's comments that you can create two same-sized swap
 partitions, one on each disk, and mount them at the same priority in
 fstab.  The kernel can then access the swaps in a similar fashion to
 raid0.  Also, in this day and age of cheap disk space, go overboard with
 space if you have ever even come close to filling swap up (I use two 1G
 partitions, and dearly wish I had set them to 2G!).  Too hard to fix
 afterwards!




Re: [gentoo-user] OT: System redesign

2003-11-05 Thread Chad Leigh -- Shire.Net LLC
On Nov 5, 2003, at 3:48 PM, Jeffrey Smelser wrote:

This depends, and correct me if I am wrong: when you use SCSI, you can use
two different hard drives and get a performance benefit.  The last I
remember, if you're using IDE, this is not the case unless you have them on
two different IDE cards.  Even though an IDE card can handle two hard
drives, it can only read from them one at a time, hence the performance
wouldn't be there...
I believe that is only true if they are on the same cable.  Each 
interface has its own master/slave pair which I believe are 
independent.  Most systems have at least two IDE buses (connectors) and 
many now have 4 or more :-)

Chad

just a side note.

I would add to Spider's comments that you can create two same-sized swap
partitions, one on each disk, and mount them at the same priority in
fstab.  The kernel can then access the swaps in a similar fashion to
raid0.  Also, in this day and age of cheap disk space, go overboard with
space if you have ever even come close to filling swap up (I use two 1G
partitions, and dearly wish I had set them to 2G!).  Too hard to fix
afterwards!


Re: [gentoo-user] OT: System redesign

2003-11-05 Thread William Kenworthy
Yes, they (the physical disks) *must* be on different IDE cables
(interfaces), with no ATAPI or other devices sharing them.  But if you have
set it up properly as raid0, this will already be the case.  It would be
nice to put swap on a couple of separate, small but very very fast hard
drives, but I have not heard of anything suitable, and then there is cost
to consider.  Comes down to best bang for buck ...

BillK

On Thu, 2003-11-06 at 07:10, Chad Leigh -- Shire.Net LLC wrote:
 On Nov 5, 2003, at 3:48 PM, Jeffrey Smelser wrote:
 
  This depends, and correct me if I am wrong: when you use SCSI, you can
  use two different hard drives and get a performance benefit.  The last I
  remember, if you're using IDE, this is not the case unless you have them
  on two different IDE cards.  Even though an IDE card can handle two hard
  drives, it can only read from them one at a time, hence the performance
  wouldn't be there...
 
 I believe that is only true if they are on the same cable.  Each 
 interface has its own master/slave pair which I believe are 
 independent.  Most systems have at least two IDE buses (connectors) and 
 many now have 4 or more :-)
 






Re: [gentoo-user] OT: System redesign

2003-11-05 Thread MAL
Thanks very much for this info...

Spider wrote:
begin  quote
On Wed, 05 Nov 2003 15:18:32 +
MAL [EMAIL PROTECTED] wrote:
I'm really just asking if anyone has any recommendations for partition
layout, what filesystems on what partitions, and what general
structure would be fastest?


I'm a nut; I've had severe problems with hard drives over the last year
(several broken ones..), so my suggestion is as follows:
/boot.  One on each drive, duplicate manually and make sure the system
can boot off either.  Install the bootloader on both, just in case one
drive dies.
I have the RAID/LVM, including root, set up on a server I run, so I'm aware
of this.. but is it really worth it on a RAID-0 setup?  If one disk dies,
I've lost the lot, and there's always booting from CD :)
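
If I did keep a /boot on each drive, pointing GRUB at both would be roughly
the following from the grub shell (assuming the BIOS presents the two WDs
as hd0 and hd1):

  grub> root (hd0,0)
  grub> setup (hd0)
  grub> root (hd1,0)
  grub> setup (hd1)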

/home  :  back up and keep as small as possible; performance-degrading
tasks shouldn't be accessing /home overly much anyhow. (bad tasks ;)
Ok, LVM/EVMS says I don't have to worry about sizes too much :)
Speaking of which, Ext2/3 and ReiserFS have filesystem resize tools; what
about XFS and JFS?

/  :  around 3-4 gb partition. perhaps more
Does XFS support extended attributes?  I am more attracted to Ext3 for that
reason, and for ease of compiling kernels (I use Win4Lin).

/var :  about 1 GB, I usually have this as ext3 with data journal mode.
It's not limiting my system's performance, and I dang well want my logs
when things break.
/usr/portage : on its own partition, around 5 GB. Use ext2 here: best
performance, and it's no valuable data, nothing that's even difficult to
recover.
Or Ext3 in writeback mode?

If you want to use ccache, up this by another one or two GB for the
cache dir, and move that here.
Move the temp build to /usr/portage/TEMP or similar, just to keep it on
your fast raided drive.
Other data (aka /mnt/store ;): recent tests suggest that JFS and XFS
both perform very well with very low CPU overhead.  ReiserFS has a bad
case of CPU slaughter.

http://fsbench.netnation.com/
So all the above partitions on the RAID disks?

//Spider
Thanks,
MAL


Re: [gentoo-user] OT: System redesign

2003-11-05 Thread Spider
begin  quote
On Wed, 05 Nov 2003 23:43:19 +
MAL [EMAIL PROTECTED] wrote:

 
 
 I have the RAID/LVM, including root, set up on a server I run, so I'm
 aware of this.. but is it really worth it on a RAID-0 setup?  If one
 disk dies, I've lost the lot, and there's always booting from CD :)

If such is the case, then it's not worth it.  :)

 
  /home  :  back up and keep as small as possible; performance-degrading
  tasks shouldn't be accessing /home overly much anyhow.
  (bad tasks ;)


 
 Ok, LVM/EVMS says I don't have to worry about sizes too much :)
 Speaking of which, Ext2/3 and ReiserFS have filesystem resize tools;
 what about XFS and JFS?

I know JFS can at least grow, not sure about shrinking.  No real
experience with XFS.
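
With the filesystems sitting on LVM anyway, growing ext2/3 would look
roughly like this (the volume group and LV names are only placeholders,
and the ext2/3 resize here is offline):

  umount /usr
  lvextend -L +2G /dev/vg0/usr
  e2fsck -f /dev/vg0/usr
  resize2fs /dev/vg0/usr
  mount /usr

For XFS, xfs_growfs on the mount point is the equivalent (and works while
mounted), from what I've read.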

 
  /  :  around 3-4 gb partition. perhaps more
 
 Does XFS support extended attributes?  I am more attracted to Ext3 for
 that reason, and for ease of compiling kernels (I use Win4Lin).

Yeah, it does, as does JFS, but JFS is in the standard kernel and XFS
requires loads of patches (ergo, Win4Lin might not work with XFS).



  /usr/portage : on its own partition, around 5 GB. Use ext2 here: best
  performance, and it's no valuable data, nothing that's even difficult
  to recover.
 
 Or Ext3 in writeback mode?


Actually ext2 is still faster in performance, and you shouldn't need a
journal there anyhow.  
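
That said, if you did want ext3 there, the journal mode is only a mount
option; something like this in fstab (device paths are just placeholders
for whatever the LVM volumes end up being called):

  /dev/vg0/var       /var           ext3   noatime,data=journal     0 2
  /dev/vg0/portage   /usr/portage   ext3   noatime,data=writeback   0 2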

 
  
  Other data (aka /mnt/store ;): recent tests suggest that JFS and XFS
  both perform very well with very low CPU overhead.  ReiserFS has a
  bad case of CPU slaughter.
  
  http://fsbench.netnation.com/
 
 So all the above partitions on the RAID disks?
 

Yep, if you want to go that way.  I still prefer /home and /, as well as
/boot, outside of the RAID and duplicated on both disks.  That way my
system has some chance of recovering if things die, without
reinstalling ;)

//Spider




-- 
begin  .signature
This is a .signature virus! Please copy me into your .signature!
See Microsoft KB Article Q265230 for more information.
end

