Re: Problem with RAID1 on kernel 2.4

2002-02-28 Thread I. Forbes
Hello Russell 

Yes it was "nr-spare-disks 1"

I just cut and pasted the setup from another machine and edited it to 
illustrate my message.  I missed the spare disks.  :-(

At least raidtools2 shouts very quickly when you do that (I know!).

Thanks

Ian


On 27 Feb 2002, at 15:14, Russell Coker wrote:

> On Wed, 27 Feb 2002 14:53, you wrote:
> > when it should have been
> >
> > raiddev /dev/md0
> >   raid-level            1
> >   nr-raid-disks         2
> >   nr-spare-disks        0
> 
> Surely that should be "nr-spare-disks 1"?
> 
> >   chunk-size            4
> >   persistent-superblock 1
> >   device                /dev/hda5
> >   raid-disk             0
> >   device                /dev/hdc5
> >   failed-disk           1
> >   device                /dev/hde5
> >   spare-disk            0
> >
> > NB note the last line of each block.
> >
> > The man page shows an example, but it is not clear how the
> > index numbers should be set.
> 
> The man page for mdctl is worse...  :(
> 
> -- 
> If you send email to me or to a mailing list that I use which has >4 lines
> of legalistic junk at the end then you are specifically authorizing me to do
> whatever I wish with the message and all other messages from your domain, by
> posting the message you agree that your long legalistic sig is void.
> 


-
Ian Forbes ZSD
http://www.zsd.co.za
Office: +27 21 683-1388  Fax: +27 21 674-1106
Snail Mail: P.O. Box 46827, Glosderry, 7702, South Africa
-





Re: Problem with RAID1 on kernel 2.4

2002-02-27 Thread I. Forbes
Hello Russell 

Thanks for your comments.  

On 26 Feb 2002, at 11:32, Russell Coker wrote:

> > 2)  Then I had endless problems with raid1.  It seems that the
> > "failed-disk" directive in /etc/raidtab does not work.  I think
> > it has something to do with devfs - which is compiled into the
> > standard "woody" 2.4 kernel.
> 
> No.  failed-disk has always worked fine for me with devfs.

I have not been able to reproduce the problem since.  However, I 
think I had the index values in the raidtab file wrong.  

I had  

raiddev /dev/md0
  raid-level            1
  nr-raid-disks         2
  nr-spare-disks        0
  chunk-size            4
  persistent-superblock 1
  device                /dev/hda5
  raid-disk             0
  device                /dev/hdc5
  failed-disk           1
  device                /dev/hde5
  spare-disk            3

when it should have been  

raiddev /dev/md0
  raid-level            1
  nr-raid-disks         2
  nr-spare-disks        0
  chunk-size            4
  persistent-superblock 1
  device                /dev/hda5
  raid-disk             0
  device                /dev/hdc5
  failed-disk           1
  device                /dev/hde5
  spare-disk            0

NB note the last line of each block: it seems the spare disks get their 
own index, starting from 0, rather than carrying on from the raid-disk 
numbering.

The man page shows an example, but it is not clear how the 
index numbers should be set.  

I have not had a chance to rebuild the raid to see if this was in fact 
my problem.  The server is running and serving web pages ...  
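
When I do get a maintenance window I expect the test to look something 
like this (an untested sketch using the devices from the raidtab above - 
note that mkraid --really-force re-initialises the array):

  # recreate the array with the corrected raidtab
  mkraid --really-force /dev/md0
  cat /proc/mdstat

  # and check that a stop/start cycle now works
  raidstop /dev/md0
  raidstart /dev/md0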

And yes, I am using raidtools2!

Thanks  

Ian  

-
Ian Forbes ZSD
http://www.zsd.co.za
Office: +27 21 683-1388  Fax: +27 21 674-1106
Snail Mail: P.O. Box 46827, Glosderry, 7702, South Africa
-





Re: Problem with RAID1 on kernel 2.4

2002-02-26 Thread Russell Coker
One thing I forgot to mention: make sure you use raidtools2, not raidtools!

-- 
Signatures >4 lines are rude.  If you send email to me or to a mailing list
that I am subscribed to which has >4 lines of legalistic junk at the end
then you are specifically authorizing me to do whatever I wish with the
message (the sig won't be read).




Re: Problem with RAID1 on kernel 2.4

2002-02-26 Thread Russell Coker
On Tue, 26 Feb 2002 10:41, I. Forbes wrote:
> 1)  The initrd is massive, about 3 MB.  I hope that means I will always
> have all the modules I will ever need at boot time, and I assume
> the RAM is freed up by the time the system is running.  I
> increased the size of my boot partition to 15 MB, but otherwise
> this is not really a problem.

I've posted to debian-devel about how to solve this.  Let me know if you 
can't find my posting in the archives and I'll dig it out again.

Basically I've got 1M compressed initrds even with a 1M SE Linux policy 
database (it's 3M uncompressed including the policy database).

The main problem here is that libc is big (and busybox-static doesn't work 
properly).
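
If you want to see where the space is going, you can loop-mount the image 
and look (a rough sketch - the filename is an example for a Debian 2.4.17 
kernel, and if your image is a gzipped filesystem rather than a cramfs you 
will need to zcat it to a temporary file and mount that instead):

  mount -o loop /boot/initrd.img-2.4.17-686 /mnt
  du -sk /mnt/* | sort -n
  umount /mnt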

> Notwithstanding the above, I put a long list of modules in both
> /etc/modules and /etc/mkinitrd/modules.  (ide stuff, md, raid1,
> ext2 ext3 etc), I am not sure how much of this was necessary.

You definitely don't want both ext2 and ext3.
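
For a box like yours the initrd list should not need to be long anyway.  
Something like the following is the sort of thing I would expect (module 
names are from memory and depend on how the kernel was built, so check 
what your kernel image actually ships):

  # /etc/mkinitrd/modules - one module name per line
  ide-mod
  ide-probe-mod
  ide-disk
  md
  raid1
  jbd
  ext3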

> 2)  Then I had endless problems with raid1.  It seems that the
> "failed-disk" directive in /etc/raidtab does not work.  I think
> it has something to do with devfs - which is compiled into the
> standard "woody" 2.4 kernel.

No.  failed-disk has always worked fine for me with devfs.

> /proc/mdstat shows the drives with their devfs names, not the old
> /dev/hd.. names.

When you compile devfs into the kernel most things will report the devfs 
names.

> I tried installing debian's devfsd package but did not solve
> the problem.  Maybe there is some clever customization required
> to make it work.

devfsd only creates compatibility symlinks and changes permissions when devfs 
is mounted.  If devfs is not mounted, and you have configured devfsd not to 
mount it automatically, then it will exit silently.  If you don't rely on the 
compatibility symlinks and you are running as root then devfsd does not do 
anything you really NEED anyway.
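
The compatibility symlinks come from a couple of entries in 
/etc/devfs/devfsd.conf, roughly the following (check the file shipped 
with the Debian package for the exact form):

  # create/remove old-style /dev names as devfs entries come and go
  REGISTER        .*              MKOLDCOMPAT
  UNREGISTER      .*              RMOLDCOMPAT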

> Putting the full devfs names into /etc/raidtab did not work.
> Maybe I did not have everything setup correctly or I got the
> names wrong.  I could not find any devfs devices in the /dev
> directory.

So devfs is not mounted, and you can just use the old-style names.
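
An easy way to check is:

  grep devfs /proc/filesystems   # is devfs compiled into the kernel?
  grep devfs /proc/mounts        # is it actually mounted on /dev?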

> I am not sure how much is related to the chipset, or whether this is a
> known issue with kernel 2.4.  In hindsight, I should have compiled a
> new kernel without initrd or devfs and made all the raid and ide
> modules built in.  I actually tried this but after two or three
> compilations without getting a kernel with the right configuration, I
> thought doing it the other way might be faster.
>
> Has anybody else been down this road yet?

Yes.  I've posted a number of messages in debian-{isp,devel} about what I've 
done.  I've got everything you list working to my satisfaction.

It's not all documented and it's not all easy, but it's all possible.

-- 
Signatures >4 lines are rude.  If you send email to me or to a mailing list
that I am subscribed to which has >4 lines of legalistic junk at the end
then you are specifically authorizing me to do whatever I wish with the
message (the sig won't be read).




Problem with RAID1 on kernel 2.4

2002-02-26 Thread I. Forbes
Hi All

I have just spent many hours trying to set up raid1 on a machine with 
an hpt366/hpt370 ide chipset.

The machine has 3 ide hard drives as raid 1 + 1 hot spare, and a 
CD-ROM; each device has its own IDE interface.

The chipset has 4 ide ports and is supported on kernel 2.4.  The 
chipset has raid "features" but, as I understand it, these are 
implemented via a software disk driver, typically on Windows.  
There are patches for kernel 2.2 and some weird drivers from the 
manufacturer's web site which I think do the same under Linux.

However kernel 2.4 has native support for the chipset and the other 
development seems to have stopped.  With 2.4 running I was 
presented with /dev/hda, /dev/hdc, /dev/hde and /dev/hdg for the drives.  
I installed linux raid1 for raid support.

I installed a standard debian 2.4.17 kernel and just enough 
packages out of woody to get it going.  The rest is potato.  After a 
long night I think I have got it all going.  However there are some 
areas that I am still not sure of:

1)  The initrd is massive, about 3 MB.  I hope that means I will always
have all the modules I will ever need at boot time, and I assume
the RAM is freed up by the time the system is running.  I
increased the size of my boot partition to 15 MB, but otherwise
this is not really a problem. 

Notwithstanding the above, I put a long list of modules in both
/etc/modules and /etc/mkinitrd/modules.  (ide stuff, md, raid1,
ext2 ext3 etc), I am not sure how much of this was necessary. 

2)  Then I had endless problems with raid1.  It seems that the
"failed-disk" directive in /etc/raidtab does not work.  I think
it has something to do with devfs - which is compiled into the
standard "woody" 2.4 kernel. 

/proc/mdstat shows the drives with their devfs names, not the old
/dev/hd.. names.  

While all the other directives seemed to work with the standard
/dev/hd.. names, and I could build the raid, if I did a raidstop
followed by a raidstart it would not start again.  Instead it gave
me an error relating to the partition listed as "failed-disk". 
The only way to get it running again was with the mkraid
--really-force option. 

I tried installing debian's devfsd package but that did not solve
the problem.  Maybe there is some clever customization required
to make it work. 

Putting the full devfs names into /etc/raidtab did not work. 
Maybe I did not have everything set up correctly or I got the
names wrong.  I could not find any devfs devices in the /dev
directory. 

After lots of manipulation I managed to migrate a working system
from a single disk to raid1 on all partitions, without relying
on failed-disk, and it all seems to be working now. 
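
For what it is worth, the failed-disk procedure I was trying to follow is 
roughly the one from the Software-RAID HOWTO, something along these lines 
(a sketch only - substitute your own devices and filesystem):

  # /etc/raidtab marks the partition still carrying the live data as
  # failed-disk, so mkraid builds a degraded mirror on the empty disk
  mkraid /dev/md0
  mke2fs /dev/md0

  # copy the data across, arrange to boot from /dev/md0, and only then
  # add the old partition back into the mirror, e.g.
  raidhotadd /dev/md0 /dev/hdc5

but as noted above I could not get the failed-disk part to behave.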

I am not sure how much is related to the chipset, or whether this is a 
known issue with kernel 2.4.  In hindsight, I should have compiled a 
new kernel without initrd or devfs and made all the raid and ide 
modules built in.  I actually tried this but after two or three 
compilations without getting a kernel with the right configuration, I 
thought doing it the other way might be faster.

Has anybody else been down this road yet?


Ian

-
Ian Forbes ZSD
http://www.zsd.co.za
Office: +27 21 683-1388  Fax: +27 21 674-1106
Snail Mail: P.O. Box 46827, Glosderry, 7702, South Africa
-





