Re: Using Alioth Opteron net install to config MD raid1

2005-05-27 Thread Lennart Sorensen
On Sun, May 22, 2005 at 04:13:54PM +1000, Ashley Flynn wrote:
> Yeah that's what I'm forced to do for installs currently. It's a total
> pain in the ass though compared to how easy it should be. Lots of
> partition juggling is no fun.
>
> I have had this problem since trying amd64 sarge about 4 months ago; so
> far it doesn't seem to matter which build of the sarge installer I use.
>
> Anyone successfully used the installer to install a RAID5 root partition?

Software raid5?  How are you making /boot?  I don't think either lilo
or grub can boot from software raid5.

Len Sorensen





Re: Using Alioth Opteron net install to config MD raid1

2005-05-26 Thread Ashley Flynn

Hi

Yeah that's what I'm forced to do for installs currently. It's a total 
pain in the ass though compared to how easy it should be. Lots of 
partition juggling is no fun.


I have had this problem since trying amd64 sarge about 4 months ago; so
far it doesn't seem to matter which build of the sarge installer I use.


Anyone successfully used the installer to install a RAID5 root partition?

Ashley

Mike Reinehr wrote:

> On Saturday 21 May 2005 02:53 am, Goswin von Brederlow wrote:
>>> Anyone got any ideas?
>>
>> No ideas apart from being speechless. Doesn't look like a
>> straightforward bug and fix.
>>
>>> Thanks
>>>
>>> Ashley
>>
>> MfG
>>         Goswin
>
> From a technical standpoint I certainly cannot add anything to what
> Goswin has just said, but I did have one thought. If you can install
> root successfully to a non-RAID partition, have you tried converting it
> over to RAID? Use some of your remaining free space to create a RAID
> partition and copy your root partition over to it (I've done something
> similar on several occasions with Knoppix). Then edit your boot loader
> & have a go. (That's particularly easy with GRUB.)
>
> HTH's
>
> cmr

--
Ashley Flynn
Damit Australia Pty Ltd
m 0403 534 754
p (02) 6262 6308
e [EMAIL PROTECTED]
w http://www.damit.com.au/





Re: Using Alioth Opteron net install to config MD raid1

2005-05-21 Thread Goswin von Brederlow
Ashley Flynn [EMAIL PROTECTED] writes:

> Hi all
>
> Trying to install daily build of sarge from 19052005 on to a RAID 5 system.
>
> I am using the businesscard netinst image. I am using

What image exactly? Alioth does not have a May 19th businesscard image,
so I'm a bit confused.

Do you mean the image from cdimage.debian.org?

> http://debian.csail.mit.edu/ as the mirror and testing as the
> version.
>
> Root / is RAID 5.
> /boot is just on a standard ext3 partition.
>
> I am using expert install.
>
> No matter which kernel option I choose after getting all the packages
> I get the following error when the installer gets to apt-get -y
> install kernel-image:
>
>    dpkg: warning, architecture 'amd64' not in remapping table.

That would imply you have a dpkg from before amd64 support was added
(which would not even build for amd64). That really can't happen.

> This message spams up the third terminal screen infinitely and the
> installer hangs.
>
> I can get it back by going and killing the hung apt-get -y install
> process.
>
> After I do that I get the message:
>
>    /usr/sbin/mkinitrd: Cannot determine SCSI module
>    Failed to create initrd image.
>
> I do not know if what I see after I kill the bad process is significant.

No, after the kill mkinitrd is bound to fail horribly. Ignore
everything past the kill.

> If I do not use a RAID root partition, the system installs fine.

Huh? Having root on RAID or not changes dpkg's capabilities? Sorry
for the disbelief, but this would be one freakish bug.

If I weren't using raid5 myself (not for /, though) I would assume
raid5 was broken and caused file corruption or something.

> System specs are:
>
>    Tyan Dual AMD M/B
>    2x Opteron 250 CPUs
>    SATA HDDs
>
> Anyone got any ideas?

No ideas apart from being speechless. Doesn't look like a
straightforward bug and fix.

> Thanks
>
> Ashley

MfG
Goswin





Re: Using Alioth Opteron net install to config MD raid1

2005-05-21 Thread Mike Reinehr
On Saturday 21 May 2005 02:53 am, Goswin von Brederlow wrote:
>> Anyone got any ideas?
>
> No ideas apart from being speechless. Doesn't look like a
> straightforward bug and fix.
>
>> Thanks
>>
>> Ashley
>
> MfG
>         Goswin

From a technical standpoint I certainly cannot add anything to what
Goswin has just said, but I did have one thought. If you can install
root successfully to a non-RAID partition, have you tried converting it
over to RAID? Use some of your remaining free space to create a RAID
partition and copy your root partition over to it (I've done something
similar on several occasions with Knoppix). Then edit your boot loader
& have a go. (That's particularly easy with GRUB.)
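
For the record, a minimal sketch of that conversion with mdadm (untested
here; /dev/sda3 as the spare partition and /dev/md0 as the new array are
made-up names, and the array starts out degraded until the old root
partition can be added back in):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 missing
  mkfs.ext3 /dev/md0
  mount /dev/md0 /mnt
  cp -ax / /mnt      # copy root, staying on this one filesystem
  # then point /etc/fstab and GRUB's menu.lst at /dev/md0, reboot, and
  # finally complete the mirror with: mdadm --add /dev/md0 /dev/sdaX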

HTH's

cmr
-- 
Debian 'Sarge': Registered Linux User #241964

More laws, less justice. -- Marcus Tullius Cicero, ca. 42 BC




Re: Using Alioth Opteron net install to config MD raid1

2005-05-20 Thread Lennart Sorensen
On Thu, May 19, 2005 at 05:15:34PM -0500, Adam Majer wrote:
> Not true. It depends on the type of disk failure. Most of the disk
> failures that I've had dealt with the drive grinding to a halt or some
> other mechanical thingy in the drive breaking. Software raid detects
> that (cannot read drive) and kicks the drive out. The admin can then
> spin down the drive, etc. (hdparm).
>
> Modern IDE drives are quite good at detecting their own failures.
> Virtually all have S.M.A.R.T. just like SCSI. Of course, SCSI is still
> more reliable than IDE :)

I agree with most of that.  As for SCSI being more reliable, I won't
agree at all.  I've had too many IBM SCSI disks die on me for that to
be the case.

As long as you NEVER run two IDE drives on one cable, they tend to
survive disk failures very well when using RAID.  If swap is NOT on
RAID, however, the system will fall over in horrible ways when it can't
read/write that part of swap anymore.

Not having swap on RAID is just insane if any part of the system is
running RAID.
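
Setting that up is cheap; a rough sketch, with made-up device names
(any pair of matching partitions will do):

  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mkswap /dev/md3
  swapon /dev/md3
  # and the /etc/fstab entry:
  # /dev/md3  none  swap  sw  0  0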

Len Sorensen





Re: Using Alioth Opteron net install to config MD raid1

2005-05-20 Thread Lennart Sorensen
On Fri, May 20, 2005 at 07:15:23AM +0200, Goswin von Brederlow wrote:
> Except those cheap IDE controllers on most home systems tend to lock up
> completely or otherwise crash in those cases very often.
>
> At least that is my experience over the years. But then I don't have
> much expensive and shiny new equipment or much experience with the new
> stuff I have (since it is new, it hasn't failed yet :).

I know that on Promise 202xx controllers I have never had a problem
when a drive died; it just got flagged as dead in /proc/mdstat, and I
scheduled time soon after to replace the drive and rebuild the mirror.
The system didn't mind whatsoever.  Not sure how onboard IDE
controllers deal with it.  The SATA ones I have dealt with seemed OK
with drive failures, although I haven't had any real SATA failures
yet, only me pretending by yanking the power to a drive.

Len Sorensen





Re: Using Alioth Opteron net install to config MD raid1

2005-05-20 Thread Ashley Flynn
Hi all

Trying to install daily build of sarge from 19052005 on to a RAID 5 system.

I am using the businesscard netinst image. I am using
http://debian.csail.mit.edu/ as the mirror and testing as the version.

Root / is RAID 5.
/boot is just on a standard ext3 partition.

I am using expert install.

No matter which kernel option I choose after getting all the packages I
get the following error when the installer gets to apt-get -y install
kernel-image:

   dpkg: warning, architecture 'amd64' not in remapping table.

This message spams up the third terminal screen infinitely and the
installer hangs.

I can get it back by going and killing the hung apt-get -y install
process.

After I do that I get the message:

   /usr/sbin/mkinitrd: Cannot determine SCSI module
   Failed to create initrd image.

I do not know if what I see after I kill the bad process is significant.

If I do not use a RAID root partition, the system installs fine.

System specs are:

   Tyan Dual AMD M/B
   2x Opteron 250 CPUs
   SATA HDDs

Anyone got any ideas?

Thanks

Ashley


Re: Using Alioth Opteron net install to config MD raid1

2005-05-19 Thread Goswin von Brederlow
Rupert Heesom [EMAIL PROTECTED] writes:

> I'm using an Alioth net install CD to put Debian onto a new dual opteron
> PC using 2 SATA drives.
>
> I'm getting confused during the drive setup process.
>
> What I'm wanting to do is use the raid 1 setup with each disk having 3
> partitions (as in a workstation install):  root, swap, home.
>
> How do I set up md to work with these partitions?
>
> I seem to be able to use EITHER the 3-partition structure OR the RAID 1
> structure (the install puts 1 ext3 partition into the RAID1 device).
>
> Can I do what I want to with this install?
>
> If I am not able to change these options, can I at least change the ext3
> file system to an XFS system?  (I think XFS is cool!)
>
> --
> Rupert Heesom [EMAIL PROTECTED]

You can't use a preset scenario for that but have to go the manual
route: first create 3 partitions on each disk, then set up RAID on
those partitions.

Do you really want swap as raid1? It is unlikely that the system will
live through a disk failure with IDE disks anyway. Most of the time a
reboot is required, if not unplugging the broken disk.

Also consider using lvm on raid. My suggestion is:

disk 1
  part 1 - 200 MB raid 1 /
  part 2 - swap
  part 3 - rest raid 1 LVM

disk 2
  part 1 - 200 MB raid 1 /
  part 2 - swap
  part 3 - rest raid 1 LVM

LVM
  /var - 1 GB (more if you want squid or similar services there)
  /usr - 2-4 GB
  /home - rest minus 2 GB
  2 GB empty space to enlarge /var or /usr or for snapshots

/tmp - tmpfs
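
Roughly, that layout maps to mdadm and LVM commands like these (a
sketch only; the device names, the vg0 group name and the sizes are
all just examples):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
  mkswap /dev/sda2 && mkswap /dev/sdb2   # swap stays plain, one per disk
  mkfs.ext3 /dev/md0                     # the 200 MB / mirror
  pvcreate /dev/md1                      # the big mirror becomes LVM space
  vgcreate vg0 /dev/md1
  lvcreate -L 1G -n var vg0
  lvcreate -L 4G -n usr vg0
  lvcreate -L 30G -n home vg0            # sized to leave ~2 GB free in vg0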

MfG
Goswin





Re: Using Alioth Opteron net install to config MD raid1

2005-05-19 Thread Adam Majer
Goswin von Brederlow wrote:

> Do you really want swap as raid1? It is unlikely that the system will
> live through a disk failure with IDE disks anyway. Most of the time a
> reboot is required, if not unplugging the broken disk.

Not true. It depends on the type of disk failure. Most of the disk
failures that I've had dealt with the drive grinding to a halt or some
other mechanical thingy in the drive breaking. Software raid detects
that (cannot read drive) and kicks the drive out. The admin can then
spin down the drive, etc. (hdparm).

Modern IDE drives are quite good at detecting their own failures.
Virtually all have S.M.A.R.T. just like SCSI. Of course, SCSI is still
more reliable than IDE :)

- Adam





Re: Using Alioth Opteron net install to config MD raid1

2005-05-19 Thread Goswin von Brederlow
Adam Majer [EMAIL PROTECTED] writes:

> Goswin von Brederlow wrote:
>
>> Do you really want swap as raid1? It is unlikely that the system will
>> live through a disk failure with IDE disks anyway. Most of the time a
>> reboot is required, if not unplugging the broken disk.
>
> Not true. It depends on the type of disk failure. Most of the disk
> failures that I've had dealt with the drive grinding to a halt or some
> other mechanical thingy in the drive breaking. Software raid detects
> that (cannot read drive) and kicks the drive out. The admin can then
> spin down the drive, etc. (hdparm).
>
> Modern IDE drives are quite good at detecting their own failures.
> Virtually all have S.M.A.R.T. just like SCSI. Of course, SCSI is still
> more reliable than IDE :)
>
> - Adam

Except those cheap IDE controllers on most home systems tend to lock up
completely or otherwise crash in those cases very often.

At least that is my experience over the years. But then I don't have
much expensive and shiny new equipment or much experience with the new
stuff I have (since it is new, it hasn't failed yet :).

MfG
Goswin





Re: Using Alioth Opteron net install to config MD raid1

2005-05-16 Thread Rupert Heesom
Thanks for your comprehensive reply!

Using your post, I've managed to configure my raid 1 MD devices.  I am
interested though in doing LVM later. I have done LVM in earlier days
but without MD in the mixture.

Is it possible to add LVM into my config when I need it?  The partitions
I've created are:  /boot, swap, / [root].


On Fri, 2005-05-13 at 14:01, Lennart Sorensen wrote:

-- 
Rupert Heesom [EMAIL PROTECTED]





Re: Using Alioth Opteron net install to config MD raid1

2005-05-16 Thread Lennart Sorensen
On Mon, May 16, 2005 at 10:55:33PM +0100, Rupert Heesom wrote:
> Thanks for your comprehensive reply!
>
> Using your post, I've managed to configure my raid 1 MD devices.  I am
> interested though in doing LVM later. I have done LVM in earlier days
> but without MD in the mixture.
>
> Is it possible to add LVM into my config when I need it?  The partitions
> I've created are:  /boot, swap, / [root].

You would have to somehow shrink / to make room for a partition to use
for lvm.  I tend to put swap inside the lvm since it saves a partition
entry and makes resizing swap much simpler later (just swapoff,
lvresize the swap volume, mkswap, and swapon again).
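
Spelled out, that cycle is roughly (volume name as in my MainVG setup;
the new size is just an example):

  swapoff /dev/MainVG/Swap
  lvresize -L 4G /dev/MainVG/Swap   # grow (or shrink) the volume
  mkswap /dev/MainVG/Swap           # rewrite the swap signature at the new size
  swapon /dev/MainVG/Swap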

Once you have lvm, you can move things around just about as you wish
inside the lvm, since you aren't limited by start and end positions on
the drive anymore the way partitions are.  You still have to resize the
filesystem at the same time as the volume, but there are tools that
handle that for you.
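
For ext3 that means something like this, done offline (the Home volume
name is made up; online growing of ext3 needs extra tooling on these
kernels, so unmount first):

  umount /home
  lvresize -L +5G /dev/MainVG/Home   # add 5 GB to the volume
  e2fsck -f /dev/MainVG/Home         # resize2fs insists on a clean fs
  resize2fs /dev/MainVG/Home         # grow the fs to fill the volume
  mount /home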

Len Sorensen


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Using Alioth Opteron net install to config MD raid1

2005-05-13 Thread Rupert Heesom
I'm using an Alioth net install CD to put Debian onto a new dual opteron
PC using 2 SATA drives.

I'm getting confused during the drive setup process.

What I'm wanting to do is use the raid 1 setup with each disk having 3
partitions (as in a workstation install):  root, swap, home.

How do I set up md to work with these partitions?

I seem to be able to use EITHER the 3-partition structure OR the RAID 1
structure (the install puts 1 ext3 partition into the RAID1 device).

Can I do what I want to with this install?

If I am not able to change these options, can I at least change the ext3
file system to an XFS system?  (I think XFS is cool!)

-- 
Rupert Heesom [EMAIL PROTECTED]





Re: Using Alioth Opteron net install to config MD raid1

2005-05-13 Thread Lennart Sorensen
On Fri, May 13, 2005 at 01:05:21PM +0100, Rupert Heesom wrote:
> I'm using an Alioth net install CD to put Debian onto a new dual opteron
> PC using 2 SATA drives.
>
> I'm getting confused during the drive setup process.
>
> What I'm wanting to do is use the raid 1 setup with each disk having 3
> partitions (as in a workstation install):  root, swap, home.
>
> How do I set up md to work with these partitions?
>
> I seem to be able to use EITHER the 3-partition structure OR the RAID 1
> structure (the install puts 1 ext3 partition into the RAID1 device).
>
> Can I do what I want to with this install?
>
> If I am not able to change these options, can I at least change the ext3
> file system to an XFS system?  (I think XFS is cool!)

I just went through the hassle of converting a couple of partitions from
XFS to ext3 (i386 running a 2.6 kernel) because I had frequent crashes
where XFS leaked so many buffers that the OS ran out of RAM and died.

No more XFS for me for a long time, at least when running nfs and samba
off it on top of LVM and MD raid.

With my setup, it seems ext3 has about 5 times the throughput of XFS,
and metadata access is probably 50 times faster.  Something is really
wrong with XFS in 2.6.5-2.6.10.  Not sure about 2.6.11 yet, and
hopefully I won't have to find out now that I have managed to change
to ext3 instead.

As for the setup, I have done this:

rceng02:~# fdisk -l /dev/sd[ab]

Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
   /dev/sda1   *        1          16      128488+  fd  Linux raid autodetect
   /dev/sda2           17        3663    29294527+  fd  Linux raid autodetect
   /dev/sda3         3664       30401   214772985   fd  Linux raid autodetect

Disk /dev/sdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
   /dev/sdb1   *        1          16      128488+  fd  Linux raid autodetect
   /dev/sdb2           17        3663    29294527+  fd  Linux raid autodetect
   /dev/sdb3         3664       30401   214772985   fd  Linux raid autodetect

rceng02:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
  29294400 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
  128384 blocks [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
  214772864 blocks [2/2] [UU]

md0 is mounted as /boot with ext2, md1 is mounted as / with ext3 and md2
is a pv device for lvm with 3 lv's inside like this:

rceng02:~# pvscan
  PV /dev/md2   VG MainVG   lvm2 [204.82 GB / 0    free]
  Total: 1 [204.82 GB] / in use: 1 [204.82 GB] / in no VG: 0 [0   ]
rceng02:~# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group MainVG using metadata type lvm2
rceng02:~# lvscan
  ACTIVE            '/dev/MainVG/Swap' [2.00 GB] inherit
  ACTIVE            '/dev/MainVG/Home' [20.00 GB] inherit
  ACTIVE            '/dev/MainVG/Data' [182.82 GB] inherit

I still don't like putting / in LVM (although now that grub supports
raid for /boot I have a much simpler setup than I used to have).  It
might be possible to just have /boot as one raid and then the rest of
the disk as another raid with lvm running on it to allocate space to
each volume as needed.  Being able to add space and resize volumes and
filesystems easily is really nice, so lvm is great.
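
As an aside, with /boot on raid1 the usual trick is to install grub
into the MBR of both disks so the box still boots if either one dies;
from the grub shell it goes roughly like this (device names assumed):

  grub> device (hd0) /dev/sda
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/sdb
  grub> root (hd0,0)
  grub> setup (hd0)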

I suspect you can make partitions on md devices, but I never had much
luck with it since there aren't actually device names allocated for
that, so it is simpler to use lvm to carve up an md device and keep a
few separate md devices for special partitions (like /boot).

Len Sorensen

