RE: help!

2000-08-14 Thread Gregory Leblanc

Check out the section on that in the Software-RAID HOWTO.  You can get it
from http://www.LinuxDoc.org/HOWTO/Software-RAID-HOWTO-6.html#ss6.1  but a
mirror closer to you might be better.
Greg

> -Original Message-
> From: Martin Brown [mailto:[EMAIL PROTECTED]]
> Sent: Monday, August 14, 2000 2:14 PM
> To: [EMAIL PROTECTED]
> Subject: help!
> 
> 2 of my 4 raid drives failed.. what's the best way to repair 
> the super block
> to see if I can get one of them to work?



RE: Problem with RAID1 at Boot time and DMA problem

2000-08-13 Thread Gregory Leblanc

> -Original Message-
> From: Matthias Koelle [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, August 13, 2000 3:53 AM
> To: [EMAIL PROTECTED]
> Subject: Problem with RAID1 at Boot time and DMA problem
> 
> Hi all,
> 
> first of all...sorry for my english ;)
> I had a strange problem at boot time. After a reboot it 
> disappeared, but anyway
> I'm interested in what happened, especially with this superblock
> inconsistency...
> [RAID1 System on 2.2.14,hda and hdc, Array was set up by RH installer]
> 
> md: superblock update time inconsistency = using most recent on
> freshest hda1
> request module [md_personality_3]: Root fs not mounted

Right here the kernel is trying to load a module to support RAID1.  Since
the root FS is on RAID1, it can't find the module.  You'll either need an
initrd, or to compile RAID support into the kernel.  
Check out http://www.LinuxDoc.org/HOWTO/Boot+Root+Raid+LILO.html or get a
copy from your local mirror.  
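If you go the initrd route on a RedHat-style system, the rough idea is below
(a sketch only; I'm assuming a modular raid1 driver and RedHat's mkinitrd):

mkinitrd --with=raid1 /boot/initrd-2.2.14-raid.img 2.2.14
# then add an "initrd=/boot/initrd-2.2.14-raid.img" line to lilo.conf
# and re-run lilo

Compiling the RAID personalities straight into the kernel (not as modules)
avoids the initrd entirely.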

> do_md_run() returned -22
> unbind 
> export_rdev(hdc1)
> unbind 
> export_rdev(hda1)
> md stop
> ...autorun done
> Bad md_map in ll_rw_block
> EXT2fs unable to read superblock
> Bad md_map in ll_rw_block
> isofs_read_super: bread failed, dev=09:02, iso_blknum=16, block=32
> Kernel panic: VFS: Unable to mount root fs on 09:02
> 
> Another problem...perhaps not really matching but perhaps 
> anyone can help
> me:
> 
> RAID1 System on a 2.2.14 Kernel. The Raid Array was created during the
> installation by the RH distribution. There are 2 IDE Disks in 
> the Array (hda
> and hdc).
> 
> xxx kernel: hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
> xxx kernel: hda: dma_intr: error=0x84 { DriveStatusError BadCRC }
> ...a few times this error message, then:
> xxx kernel: hda: DMA disabled
> xxx kernel: ide0: reset: success
> 
> OK...the kernel detected problems with the DMA mode (which could
> probably cause data faults, etc.) and so disabled it. The 
> disks are both
> Maxtor92041U4s and the chipset is a Intel 440BX, so there should be no
> problem with the DMA mode. The IDE cables are also not very 
> long (which
> length do you recommend?). The only way hda differs 
> from hdc is
> that it is in a removable frame. Could that cause the 
> problems?  Or are there any
> other hdparm settings I should look at?

Depends on what kind of a chassis it is.  If it's an older chassis (as in
made before UDMA/XX was in wide use), then it may not handle the tolerances
for your drive/IDE controller.  I don't use IDE drives, so I can't recommend
any particular chassis, sorry.
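One way to rule the frame in or out: force DMA off yourself with hdparm and
see whether the CRC errors stop.  A sketch:

hdparm -d0 /dev/hda     # turn DMA off for hda
hdparm -d1 /dev/hda     # turn it back on once you trust the cabling/frame

If the errors only show up with DMA enabled, the frame or cabling tolerances
are the usual suspect.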
Greg



FW: updates

2000-08-12 Thread Gregory Leblanc

Here's the announcement of the Linux-RAID FAQ being available from
LinuxDoc.org, and its 200+ worldwide mirrors.  

> -Original Message-
> From: Greg Ferguson [mailto:[EMAIL PROTECTED]]
> Sent: Friday, August 11, 2000 7:07 PM
> To: [EMAIL PROTECTED]
> Subject: updates
> 
> 
> Linux-RAID FAQ
>     v0.0.5 11 August 2000
> Gregory Leblanc, <[EMAIL PROTECTED]>
> 
> This is a FAQ for the Linux-RAID mailing list, hosted on
> vger.rutgers.edu. It's intended as a supplement to the
> existing Linux-RAID HOWTO, to cover questions that keep
> occurring on the mailing list. PLEASE read this document
> before you post to the list.
> 
>   * NEW (FAQ) entry
>   http://www.linuxdoc.org/FAQ/Linux-RAID-FAQ/
> 
> 
> (as an FYI, the main FAQ index - http://www.linuxdoc.org/FAQ/)
> 
> r,
> 
> -- 
> Greg Ferguson - s/w engr / mtlhd | [EMAIL PROTECTED]
> SGI Tech Pubs - http://techpubs.sgi.com  | 
> Linux Doc Project - http://www.linuxdoc.org  |
> 
> 
> --  
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact 
> [EMAIL PROTECTED]
> 



RE: lilo issue

2000-08-11 Thread Gregory Leblanc

> -Original Message-
> From: Nick Kay [mailto:[EMAIL PROTECTED]]
> Sent: Friday, August 11, 2000 9:27 AM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]
> Subject: RE: lilo issue
> 
> >Can you show us your lilo.conf?  Do you have a default label 
> set?  Does
> >lilo-21.5 include RH's boot from RAID1 patch, or another 
> boot from RAID1
> >patch?  
> 
> No I don't have the default label set - I tend to like having the
> option of alternate kernels as a rescue mechanism. I guess I 
> don't have much choice in the matter this time though.

Unfortunately, in order not to break things, the default label must be set
to something, although I'm not sure what happens if you set it to something
invalid.  You can configure lilo so that it waits forever, regardless of
whether or not you have a default label specified.  The only thing the
default label does in that configuration is specify the kernel to boot if
you simply press Enter.  For instance, a rough lilo.conf fragment (just a
sketch, not from a real config):
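prompt                  # always stop at the boot: prompt
# with no "timeout=" line, lilo waits forever for input
default=linux           # used only when you just press Enter
image=/boot/vmlinuz
        label=linux
        read-only

Later,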
Greg



RE: lilo issue

2000-08-11 Thread Gregory Leblanc

> -Original Message-
> From: Nick Kay [mailto:[EMAIL PROTECTED]]
> Sent: Friday, August 11, 2000 6:51 AM
> To: [EMAIL PROTECTED]
> Subject: lilo issue
> 
> Hi all,
>   I have my raid-1 up and running over two 9gig scsi
> drives under slackware 7.1 with kernel 2.2.16+raid-2.2.6-AO
> from Ingo and lilo-21.5.
>   After disconnecting the second drive (ID 1) and rebooting
> works fine, however pulling ID0 causes problems in that lilo comes
> up without any kernel labels and if left alone will do a floppy seek
> and return to the "boot:" prompt. Manually entering a kernel label
> will get the system up. After installing a fresh empty drive as ID0
> and running my rebuild script, the raid system appears to be fine
> but the behaviour of lilo is still broken. Rerunning lilo on the new
> config makes no difference - there are still no kernel labels and the
> system has to be booted manually.
>   Any ideas or pointers where to look??

Can you show us your lilo.conf?  Do you have a default label set?  Does
lilo-21.5 include RH's boot from RAID1 patch, or another boot from RAID1
patch?  
Greg



RE: Loss of a SCSI device and RAID

2000-08-10 Thread Gregory Leblanc

> -Original Message-
> From: Eric Z. Ayers [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, August 10, 2000 6:46 AM
> To: [EMAIL PROTECTED]
> Subject: Loss of a SCSI device and RAID
> 
> I've been wondering how this would work for a while.
> 
> I know that the Linux kernel auto-detects the SCSI devices on boots
> and assigns them
> 
> /dev/sda to the first one
> /dev/sdb to the second one ...
> 
> and so on.

Yep.  Lots of planning done there.  :-)

> Doesn't this put a kink in your plans if you remove a disk physically
> and then restart the system?  I mean, what if the failure on the disk
> is something like smoke coming out of the drive bay and the next time
> you reboot the kernel doesn't even see the device?

If you're using just the SCSI drives, yes, it screws everything up.  

> Is there a way to hard code /dev/sda to Target ID N and /dev/sdb to
> Target ID M so that in case N fails, your old /dev/sdb doesn't show up
> as /dev/sda when you reboot?

Sort of.  There are some "devfs" patches that make the /dev filesystem MUCH
cleaner, and they keep disks at the same location, even when other disks are
removed.  It does break a few things though.  I don't think it currently
works with RAID, at least not on 2.2.x.

> The setup I'm envisioning is a 2.2.16 kernel with the latest patches,
> a single SCSI bus with 2 hard drives in a RAID 1 configuration.  If it
> makes a difference, the system will NOT boot from these disks.

Well, with persistent superblocks, you don't have anything to worry about.
The kernel will just detect your RAID sets, and configure them.  Then, since
/etc/fstab is pointed at /dev/mdX rather than /dev/sdX, you don't have to
worry about SCSI drive names changing.  For example, a minimal fstab entry
might look like the line below (the mount point and filesystem type are just
placeholders):
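/dev/md0        /data           ext2    defaults        1 2

HTH,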
Greg



RE: Where to get uptodate raidtool/kernelpatches (2.2.1x) ?

2000-08-08 Thread Gregory Leblanc

> -Original Message-
> From: Thomas Waldmann [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, August 06, 2000 10:38 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Where to get uptodate raidtool/kernelpatches (2.2.1x) ?
> 
> > Sure enough, you've got an old version of the RAID code.  
> Hop over to
> > http://people.redhat.com/mingo/raid-patches/ .  I've had no 
> problems running
> > Alan Cox's 2.2.17pre13 code with the 2.2.17 patch from 
> Ingo.  The raidtools
> > 0.90 that you have should work fine, if you have any 
> concerns just grab the
> > raidtools source from Ingo's directory (the one that says 
> "dangerous").
> 
> Does anybody know WHY this version of raidtools is labeled as 
> "dangerous" and
> what should be the conclusion ? Shall we use older raidtools ???

Oh yeah, this one is a FAQ.  They're labeled "dangerous" because they aren't
released code, I think.  The version on Ingo's site is as new (and as safe)
as the RAIDtools 0.90 gets.  If nobody finds a better way to say that, then
it's going in the FAQ like that.  :-)
Greg



RE: Problems booting from RAID

2000-08-08 Thread Gregory Leblanc

> -Original Message-
> From: Jane Dawson [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, August 08, 2000 2:03 AM
> To: [EMAIL PROTECTED]
> Subject: RE: Problems booting from RAID
> 
> Hi,
> 
> I am tearing my hair out over this stuff (I should have 
> mentioned that I am
> completely new to not only RAID but Linux in general).  This 
> is the first
> task I have been given in my new job - talk about being 
> thrown in at the
> deep end! :)

Yikes, I think I'd have killed somebody for that one.  :-)

> I've written to many lists for help with this and received various
> conflicting advice and suggestions (many thanks to those who 
> have written
> back so far!) So, I've decided the best plan of action is to 
> start again
> from scratch with blank hard disks, a new reinstallation and 
> to copy the
> setup as described in Boot + Root + Raid + Lilo HOWTO - 
> however I don't
> understand how to create a /boot partition as well as a root 
> partition.
>
> I would be eternally grateful if somebody could explain the 
> logic behind
> this as well as how to do it :)

My logic behind this (others may have different logic) is to help out with
stupid boot loader issues.  Most machines that I've used have some booting
issues where they can't read past a certain part of the drive.  I create a
smallish (20MB) /boot partition right at the start of the drive (to make
reads fast, but seeks slow).  Then I use the rest of the drive, broken into
various parts, for everything else I try to store.
You create the partitions just like you would any other, usually by using
some breed of fdisk, or perhaps a specialized partitioning tool.  Here's
part of the output from 'fdisk -l /dev/sda' and 'fdisk -l /dev/sdb' on my
RAID test server: 

[root@Socks /root]# fdisk -l /dev/sda

Disk /dev/sda: 255 heads, 63 sectors, 259 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1   *         1         9     72261   82  Linux swap
/dev/sda2            10       259   2008125    5  Extended
/dev/sda5            10        13     32098+  fd  Linux raid autodetect
/dev/sda6            14       259   1975963+  fd  Linux raid autodetect
[root@Socks /root]# fdisk -l /dev/sdb

Disk /dev/sdb: 255 heads, 63 sectors, 261 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1             1         9     72261   82  Linux swap
/dev/sdb2            10       261   2024190    5  Extended
/dev/sdb5            10        13     32098+  fd  Linux raid autodetect
/dev/sdb6            14       259   1975963+  fd  Linux raid autodetect
[root@Socks /root]# 

Well, on this one, I didn't follow my own advice, but you can see what the
partitions are (actually, I think that RedHat wouldn't let me partition as I
would have liked).  Hopefully that answers your questions...
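One more note: setting a partition's type to fd (Linux raid autodetect) is
just a couple of keystrokes in fdisk.  From memory, the session goes roughly
like this (the partition number is an example):

fdisk /dev/sda
   t        (change a partition's type)
   5        (the partition number, e.g. sda5)
   fd       (Linux raid autodetect)
   w        (write the table and exit)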
Greg



RE: disc drive cache

2000-08-08 Thread Gregory Leblanc

> -Original Message-
> From: Kay Salzwedel [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, August 08, 2000 5:46 AM
> To: [EMAIL PROTECTED]
> Subject: disc drive cache
> 
> Hello,
> 
> does anybody know how to find out the size of an (E)IDE drive's
> cache if the company 'lost' the data sheet?

Well, you can find it from dmesg, just grep for hd? (for whatever IDE device
you're looking for).  Mine reports:

PIIX4: IDE controller on PCI bus 00 dev 39
PIIX4: not 100% native mode: will probe irqs later
ide0: BM-DMA at 0xf000-0xf007, BIOS settings: hda:pio, hdb:pio
ide1: BM-DMA at 0xf008-0xf00f, BIOS settings: hdc:pio, hdd:pio
hda: FUJITSU MPD3130AT, ATA DISK drive
hdb: Hewlett-Packard CD-Writer Plus 7500, ATAPI CDROM drive
hdc: IOMEGA ZIP 100 ATAPI, ATAPI FLOPPY drive
hdd: CD-532E, ATAPI CDROM drive

You should also be able to get this from hdparm -i /dev/hd? (as above).
Again, here is the report from my workstation:

/dev/hda:

 Model=FUJITSU MPD3130AT, FwRev=DD-04-47, SerialNo=01009599
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
 BuffType=0(?), BuffSize=512kB, MaxMultSect=16, MultSect=off
 DblWordIO=no, OldPIO=2, DMA=yes, OldDMA=0
 CurCHS=16383/16/63, CurSects=-66060037, LBA=yes, LBAsects=25431840
 tDMA={min:120,rec:120}, DMA modes: mword0 mword1 mword2 
 IORDY=yes, tPIO={min:120,w/IORDY:120}, PIO modes: mode3 mode4 
 UDMA modes: mode0 mode1 *mode2 

Look for BuffSize in there, that's the cache size.  HTH,
Greg



RE: RAID questions

2000-08-07 Thread Gregory Leblanc

> -Original Message-
> From: Adam McKenna [mailto:[EMAIL PROTECTED]]
> Sent: Monday, August 07, 2000 9:27 PM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: RAID questions
> 
> On Mon, Aug 07, 2000 at 08:07:58PM -0700, Gregory Leblanc wrote:
> > I'm a little verbose, but this should answer most of your questions,
> > although sometimes in a slightly annoyed tone.  Don't take 
> it personally.
> 
> There's a difference between being annoyed and being 
> immature.  You seem to
> have answered everything with maturity, so no offense taken.

Phew.  Sometimes I come off poorly, and people flip out.  I hate that.  

> I did a search on google.  The majority of posts I was able 
> to find mentioned
> a 2.2.15 patch which could be applied to 2.2.16 as long as 
> several hunks were
> hand-patched.  Personally, I don't particularly like 
> hand-patching code.
> Especially when the data that my job depends on is involved.

Hmm, there have been some recently (I think) for the 2.2.16 kernels.  I've
not kept up on my testing (no time), so I'm still running 2.2.14.  

> > > 2)  The current 2.2.16 errata lists a problem with md.c which 
> > > is fixed by the
> > > patch "2.2.16combo".
> > 
> > I believe that md.c problem applies to the old RAID code.  The new
> > RAID stuff has been VERY good for quite a while now.
> 
> The howto on linux.org listed 
> ftp://www.fi.kernel.org/pub/linux/daemons/raid 
> as the "official" location for the RAID patches.  The patches 
> located there 
> only went up to 2.2.11.  In fact, looking now, the 
> linuxdoc.org howto lists
> the same location.

True enough, it's out of date.  I'm going to try to get Jacob to point to my
FAQ, but I haven't gotten enough feedback just yet.  

> the first place.  In retrospect, I suppose it was a stupid 
> question, but I'd 
> rather be safe than sorry.

There are no stupid questions, only stupid answers.  :-)  Amongst the other
45 kernel compiles I've got to do this week, I'll try to find some time to
look at the 2.2.16 patch, and see if it works nicely with Ingo's RAID patch
on my system.

> Thanks for the link.  However as mentioned above the howto 
> there still gives
> the incorrect location for current kernel patches.

Sorry, I can't fix that.  I just help put the HOWTOs online, I don't write
them (at least not much).

> > > So, I have the following questions.
> > > 
> > > 1)  Do I need to apply the RAID patch to 2.2.16 or not?
> > 
> > Do you want new RAID, or old RAID? 
> 
> Well, the box won't boot with the stock MD driver.

In that case, you need to patch your kernel.  :)  I think you mentioned that
you'd already found the 2.2.16 patch, so run with that, and see what
happens.

> > > 2)  If I do, will it still be broken unless I apply the 
> > > "2.2.16combo" patch?

D'uh, I'll look and see what it does for me, and report back.  Probably NOT
tomorrow, but some time this week.  Maybe somebody else will step forward
with results before I get to it.

> I was hoping my post would serve as a reminder to those on 
> the list who are in
> charge of maintaining those resources.

I dunno, the kernel list just scares me.  There's too much extraneous stuff
that goes through there anyway, and 90% of it is over my head.  Speaking of
which, I'll trim them from the list, after this email (since somebody there
might have tried more patches than myself).

> > If you don't know what you're doing, GET A TEST MACHINE.  
> Sorry to yell, but
> > don't play with things on production boxes.  Find a nice 
> cheapie P-133 type
> > box, grab a couple of drives, and test out RAID that way.  
> Don't do that on
> > production boxes.  If somebody can't come up with $200 to 
> get you a test
> > box, then spring for it yourself, and get a decent X term for home.

[SNIP]
> My current prime objective is getting rid of the current kernel we are
> running, as I am having other problems with the box that I 
> think are kernel
> related.  (EAGAIN errors -- resource temporarily unavailable 
> when trying to 
> make a TCP connection to a remote host after about 5 days of 
> uptime)  A test 
> box would be nice but it could take weeks to obtain one.  
> Personally, I'd
> rather avoid having to go in at 2:30 in the morning again to 
> reboot the box.

Ah, sorry about that one, it might have been a little out of line.  However,
do get yourself a test box, doesn't even need to be the same hardware, just
something that you can break.  

> 
> I looked at geocrawler, b

RE: RAID questions

2000-08-07 Thread Gregory Leblanc

I'm a little verbose, but this should answer most of your questions,
although sometimes in a slightly annoyed tone.  Don't take it personally.

> -Original Message-
> From: Adam McKenna [mailto:[EMAIL PROTECTED]]
> Sent: Monday, August 07, 2000 12:10 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: RAID questions
> 
> Hello,
> 
> I consider the current state of affairs with Software-RAID to 
> be unbelievable.

It's not as bad as you think.  :-)

> 1)  The current RAID-Howto (on www.linux.org) does not 
> indicate the correct 
> location of RAID patches.  I had to go searching all over 
> the web to find
> the 2.2.16 RAID patch.

Did you try reading the archives for the Linux-RAID list?  I've started on a
FAQ that will be updated at very least monthly, and posted to linux-raid.

> 2)  The current 2.2.16 errata lists a problem with md.c which 
> is fixed by the
> patch "2.2.16combo".

I believe that md.c problem applies to the old RAID code.  The new RAID
stuff has been VERY good for quite a while now.

> 3)  The patch "2.2.16combo" FAILS if the RAID patch has 
> already been applied.
> Ditto with the RAID patches to md.c if the 2.2.16combo 
> patch has already
> been applied.

Perhaps they're not compatible, or perhaps one includes the other?  Have you
looked at the patches to try to figure out why they don't work?  I'm NOT a
hacker, but I can certainly try to figure out why patches don't work.  

> 4)  The kernel help for all of the MD drivers lists a nonexistant
> Software-RAID mini-howto, which is supposedly located at
> ftp://metalab.unc.edu/pub/Linux/docs/HOWTO/mini.  There is no such
> document at this location.

There are 2 Software-RAID HOWTOs available there, although they are 1
directory higher than that URL.  For the code included in the stock kernels,
see ftp://metalab.unc.edu/pub/Linux/docs/HOWTO/Software-RAID-0.4x-HOWTO.
For the new RAID code by Ingo and others, see
ftp://metalab.unc.edu/pub/Linux/docs/HOWTO/Software-RAID-HOWTO.  Both of
these documents are easily available from http://www.LinuxDoc.org/

> 5)  The kernel help also does not make it clear that you even 
> need a RAID
> patch with current kernels.  It is implied that if you 
> "Say Y here" then
> your kernel will support RAID.  This problem is 
> exacerbated by the missing
> RAID patches at the location specified in the actual 
> Software-RAID-Howto.

No, you don't NEED to patch your kernel to get RAID (md raid, that is)
working.  You DO need to patch the kernel if you want the new RAID code.
Everyone on the Linux-RAID list will recommend the new code, I don't know
about anybody else.

> So, I have the following questions.
> 
> 1)  Do I need to apply the RAID patch to 2.2.16 or not?

Do you want new RAID, or old RAID? 

> 2)  If I do, will it still be broken unless I apply the 
> "2.2.16combo" patch?

If you apply the combo patch, that will fix things with the old code (I
think, have not verified this yet).  If you apply the RAID patch (from the
location above), then you don't need to worry about the fixes in the
2.2.16combo.

> 3)  If it will, then how do I resolve the problem with the 
> md.c hunk failing 
> with "2.2.16combo"?

Apply manually?  Just take a look at the .rej files (from /usr/src/linux do
a 'find . -name "*rej*"') and see what failed to apply.  I generally open a
split pane editor, (for emacs, just put two file names on the command line),
and see if I can find where the patch failed, and try to add the
missing/remove the extraneous lines by hand.  It's worked so far.

> 4)  Is there someone I can contact who can update publically 
> available 
> documentation to make it easier for people to find what 
> they're looking 
> for?

Not sure about the stuff in the Linux kernel sources, but I'd assume that
somebody on the Linux-kernel list can do that.  As for the Software-RAID
HOWTO, tell Jacob (he IS on the raid list).  Again, I've created a FAQ for
the Linux-raid mailing list, which should cover many of these questions.
I'll be asking the list maintainer about putting a footer onto posts to the
list, but I'm not sure about the feasibility of that just yet.  

> This is a production system I am working on here.  I can't 
> afford to have it
> down for an hour or two to test a new kernel.  I'd rather not 
> be working with
> this mess to begin with, but unfortunately this box was 
> purchased before I
> started this job, and whoever ordered it decided that 
> software raid was
> "Good enough".

If you don't know what you're doing, GET A TEST MACHINE.  Sorry to yell, but
don't play with things on production boxes.  Find a nice cheapie P-133 type
box, grab a couple of drives, and test out RAID that way.  Don't do that on
production boxes.  If somebody can't come up with $200 to get you a test
box, then spring for it yourself, and get a decent X term for home.
As for Software RAID being good enough, I find that to be true.  If I needed
hot swap,

RE: raid-2.2.17-A0 cleanup for LVM

2000-08-07 Thread Gregory Leblanc

> -Original Message-
> From: Carlos Carvalho [mailto:[EMAIL PROTECTED]]
> Sent: Monday, August 07, 2000 10:57 AM
> To: Andrea Arcangeli
> Cc: [EMAIL PROTECTED]
> Subject: Re: raid-2.2.17-A0 cleanup for LVM
> 
>  >In 2.2.x that's not possible but for _very_ silly reasons.
> 
> So can't this be fixed?

I wouldn't expect it to be fixed.  2.4 is well on its way, and seems to
have quite a few "silly" things fixed.

>  >On 2.4.x we now have a modular and recursive make_request 
> callback, that
>  >will allow us to handle all the volume management layering 
> correctly (so
>  >if raid5 on top of raid0 isn't working right now in 2.4.x send a bug
>  >report ;).
> 
> Yes, but it's useless because of the abysmal (absence of) speed. And
> all the VM problems... The machine I need raid50 on is a central
> server, if it stops everything else goes down. In fact I'm not using
> 2.4 on it precisely because of the VM/raid problems!! :-( :-(
> 
> If I can't do raid50 on our server I'll have to resort to raid10,
> losing 50% of our so expensive disks...

No, DASD (disks) are cheap, compared with other things, like upgrading the
processor(s) on your oracle or DB2 server.  If you're dealing with SCSI
(which you must be, for that many drives), and using RAID 5, speed can't be
that paramount.  Just put another drive on each bus.  I know, nobody likes
to spend money on disks, but they're cheaper than losing data.
Greg



RE: FAQ update

2000-08-07 Thread Gregory Leblanc

> -Original Message-
> From: James Manning [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, August 05, 2000 6:08 AM
> To: Linux Raid list (E-mail)
> Subject: Re: FAQ update
> 
> [Luca Berra]
> > >The patches for 2.2.14 and later kernels are at
> > >http://people.redhat.com/mingo/raid-patches/. Use the 
> right patch for
> > >your kernel, these patches haven't worked on other 
> kernel revisions
> > >yet.
> > 
> > i'd add: dont use netscape to fetch patches from mingo's 
> site, it hurts
> > use lynx/wget/curl/lftp
> 
> Yes, *please* *please* *please*

I need some clarification on this.  I couldn't make lynx work, it chopped
off long lines or something.  wget works, I've never heard of the other two.
Why exactly is NetScrape bad?  That server load thing sounds fishy to me...

Greg



RE: FAQ

2000-08-07 Thread Gregory Leblanc

Here's one more update of the FAQ.  Assuming not too many objections, I'll
send it to Jacob, and see if I can contact the list owner and get a footer
onto this list.  
Greg

Linux-RAID FAQ

Gregory Leblanc

  gleblanc (at) cu-portland.edu
   
   Revision History

   Revision v0.03 7 August 2000 Revised by: gml
   Added a request to use a wget type program to fetch the patch. Tried
   to make things look a little bit better, failed miserably.

   Revision v0.02 4 August 2000 Revised by: gml
   Revised the "How do I patch?" and the "What does /proc/mdstat look
   like?" questions.
   
   This is a FAQ for the Linux-RAID mailing list, hosted on
   vger.rutgers.edu. It's intended as a supplement to the existing
   Linux-RAID HOWTO, to cover questions that keep occurring on the
   mailing list. PLEASE read this document before you post to the list.
 _
   
   1. General
  
1.1. Where can I find archives for the linux-raid mailing list?

   2. Kernel
  
2.1. I'm running [insert your linux distribution here]. Do I need
to patch my kernel to make RAID work?

2.2. How can I tell if I need to patch my kernel?
2.3. Where can I get the latest RAID patches for my kernel?
2.4. How do I apply the patch to a kernel that I just downloaded
from ftp.kernel.org?

1. General

   1.1. Where can I find archives for the linux-raid mailing list?
   
   My favorite archives are at Geocrawler.
   http://www.geocrawler.com/lists/3/Linux/57/0/
   
   Other archives are available at
   http://marc.theaimsgroup.com/?l=linux-raid&r=1&w=2
   
   Another archive site is
   http://www.mail-archive.com/linux-raid@vger.rutgers.edu/
   
2. Kernel

   2.1. I'm running [insert your linux distribution here]. Do I need to
   patch my kernel to make RAID work?
   
   Well, the short answer is, it depends. Distributions that are keeping
   up to date include the RAID patches in their kernels; the kernel that
   RedHat distributes does, as do some others. If you download a 2.2.x
   kernel from ftp.kernel.org, then you will need to patch your kernel.
   
   2.2. How can I tell if I need to patch my kernel?
   
   The easiest way is to check what's in /proc/mdstat. Here's a sample
   from a 2.2.x kernel, with the RAID patches applied.

 [gleblanc@grego1 gleblanc]$ cat /proc/mdstat
 Personalities : [linear] [raid0] [raid1] [raid5] [translucent]
 read_ahead not set
 unused devices: <none>

   If the contents of /proc/mdstat looks like the above, then you don't
   need to patch your kernel.
   
   Here's a sample from a 2.2.x kernel, without the RAID patches applied.

[root@finch root]$ cat /proc/mdstat
Personalities : [1 linear] [2 raid0] [3 raid1] [4 raid5]
read_ahead not set
md0 : inactive
md1 : inactive
md2 : inactive
md3 : inactive

   If your /proc/mdstat looks like this one, then you need to patch your
   kernel.
   
   2.3. Where can I get the latest RAID patches for my kernel?
   
   The patches for the 2.2.x kernels up to, and including, 2.2.13 are
   available from ftp.kernel.org. Use the kernel patch that most closely
   matches your kernel revision. For example, the 2.2.11 patch can also
   be used on 2.2.12 and 2.2.13.
   
   The patches for 2.2.14 and later kernels are at
   http://people.redhat.com/mingo/raid-patches/. Use the right patch for
   your kernel, these patches haven't worked on other kernel revisions
   yet. Please use something like wget/curl/lftp to retrieve this patch,
   as it's easier on the server than using a client like Netscape.
   Downloading patches with Lynx has been unsuccessful for me; wget may
   be the easiest way.
   
   2.4. How do I apply the patch to a kernel that I just downloaded from
   ftp.kernel.org?
   
   First, unpack the kernel into some directory, generally people use
   /usr/src/linux. Change to this directory, and type patch -p1 <
   /path/to/raid-version.patch.
   
   On my RedHat 6.2 system, I decompressed the 2.2.16 kernel into
   /usr/src/linux-2.2.16. From /usr/src/linux-2.2.16, I type in patch -p1
   < /home/gleblanc/raid-2.2.16-A0. Then I rebuild the kernel using make
   menuconfig and related builds.
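   Putting that together, the whole sequence on my RedHat 6.2 box looks
   about like this (the paths are just examples; --dry-run needs GNU
   patch):

cd /usr/src/linux-2.2.16
patch -p1 --dry-run < /home/gleblanc/raid-2.2.16-A0   # rehearse first
patch -p1 < /home/gleblanc/raid-2.2.16-A0
make menuconfig
make dep bzImage modules modules_install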



RE: Problems booting from RAID

2000-08-07 Thread Gregory Leblanc

> -Original Message-
> From: Jane Dawson [mailto:[EMAIL PROTECTED]]
> Sent: Monday, August 07, 2000 3:13 AM
> To: [EMAIL PROTECTED]
> Subject: Problems booting from RAID
> 
> Hi, 
> 
> I decided to set up a completely RAID-based system using two 
> identical IDE
> hard disks, each with 3 partitions (boot, swap and data). 

Be careful about swap on RAID.  There are lots of details in the archives,
but SWAP on RAID during reconstruction is a really bad thing, so don't let
it happen.

> But I am having appalling problems in getting the machine to boot from
> RAID! I've been through the Software-RAID-HOWTO so many times 
> I can almost
> recite it, but still things aren't going as they should.
> 
> Does anyone have any pointers as to where I'm going wrong? At 
> the moment,
> all six partitions are set in 'cfdisk' to type 'fd'. What 
> should I put in
> lilo.conf, etc. ?

Probably need to patch lilo, and then have a look at the LDP document on the
subject.  
http://www.LinuxDoc.org/HOWTO/Boot+Root+Raid+LILO.html

This one looked good, but I haven't dug into it very much, as RH works
wonderfully to install onto RAID.  
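For a first cut at lilo.conf, the fragment below is the general shape (a
sketch assuming a RAID1 root on /dev/md0 built from hda1 and hdc1, and a
RAID-patched lilo):

boot=/dev/md0           # the patched lilo writes to both mirror halves
root=/dev/md0
image=/boot/vmlinuz
        label=linux
        read-only

The HOWTO above covers the details and the corner cases.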
Greg



RE: owie, disk failure

2000-08-07 Thread Gregory Leblanc

> > > disks are less than two weeks old, although I have heard 
> of people 
> > > having similar problems (disks failing in less than a 
> month new from 
> > > the factory) with this brand and model   I would like 
> to get the 
> 
> In my experience 95% of drive failures occur in the first couple of
> weeks. If they get out of this timeframe, then I find they 
> usually last
> for a long time. I don't think this is a failing of this brand and/or
> model.

Well, from the drives that I've had, they either fail after a few weeks, or
after several years (like 5+).  Almost never in between.  We keep a spare
drive of each size around anyway.  :-)

> > To check and see if the drive is actually in good 
> condition, grab the
> > diagnostic utility from the support site of your drive manufacturer,
> > boot from a DOS floppy, and run diagnostics on the drive.
> 
> I have to confess I've never heard of manufacturers offering 
> diagnostic
> utilities for disks... Gregory, can you point me at any examples? Am I
> just being a complete dumbass here?

Yes, you are.  :-)  From Maxtor's site (since I just RMA'd a drive last week)
(http://www.maxtor.com/) click on software download.  Right on that page is
info about the MaxDiag utility.  It does a little more than badblocks and
friends, at least for IDE drives.  It will return drive specific error
codes, and if you've run all of those tests by the time you call support,
you can just give them the error numbers, and they issue an RMA.  The other
nice feature is that it gives you the tech support number to call as soon as
it shows the error.  :-)
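If you'd rather poke at a disk from Linux before booting DOS, a read-only
badblocks pass is safe and will at least show hard read errors (the -w
write test destroys data, so save that for a disk you've given up on):

badblocks -sv /dev/hdg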

> > In order for them to replace my drives, I've had to do "write"
> > testing, which destroys all data on the drive, so you may 
> want to disconnect
> > power from one of the drives before you play around with that.
> 
> If you don't trust yourself to get the right disk for a write 
> test then
> you need to do this. However, if you check *EXACTLY* what you 
> are doing
> before running a write-test, then I don't see any reason to 
> go so far as
> to unplug the disks. YMMV.

Well, that's true, but if you don't trust yourself to get the right drive,
then you should unplug the one that still has the data intact.  Depending on
the value of the data, it may be worth unplugging it just for safety's sake,
although if it's that important, it should be backed up.  Later,
Greg



RE: owie, disk failure

2000-08-06 Thread Gregory Leblanc

> -Original Message-
> From: Jeffrey Paul [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, August 06, 2000 5:56 PM
> To: [EMAIL PROTECTED]
> Subject: owie, disk failure
> 
> h, the day i had hoped would never arrive has...
> 
> Aug  2 07:38:27 chrome kernel: raid1: Disk failure on hdg1, 
> disabling device.
> Aug  2 07:38:27 chrome kernel: raid1: md0: rescheduling block 8434238
> Aug  2 07:38:27 chrome kernel: md0: no spare disk to reconstruct 
> array! -- continuing in degraded mode
> Aug  2 07:38:27 chrome kernel: raid1: md0: redirecting sector 8434238 
> to another mirror
> 
> my setup is a two-disk (40gb each) raid1 configuration... hde1 and 
> hdg1.   I didn't have measures in place to notify me of such an 
> event, so I didnt notice it until i looked at the console today and 
> noticed it there...

I think I ran for about 2 weeks on a dead drive.  Thankfully it wasn't a
production system, but notification isn't quite as "out of the box" as it
needs to be just yet.

> I ran raidhotremove /dev/md0 /dev/hdg1 and then raidhotadd /dev/md0 
> /dev/hdg1 and it seemed to begin reconstruction:
> 
> but I got scared and decided to stop it...  so now it's sitting idle 
> unmounted spun down (both disks) awaiting professional advice (rather 
> than me stumbling around in the dark before i hose my data).   Both 
> disks are less than two weeks old, although I have heard of people 
> having similar problems (disks failing in less than a month new from 
> the factory) with this brand and model   I would like to get the 
> drives back to the way they were before the system decided that the 
> disk had failed (what causes it to think that, anyways?) and see if 
> it continues to work, as I find it hard to believe that the drive 
> would have died so quickly.   What is the proper course of action?

First, do you have ANY log messages from anything other than RAID indicating
a failed disk?  Since these are IDE drives, I'd expect some messages from
the IDE subsystem if the drive really had died (my SCSI messages went pretty
wild when I had a disk fail).  To check and see if the drive is actually in
good condition, grab the diagnostic utility from the support site of your
drive manufacturer, boot from a DOS floppy, and run diagnostics on the
drive.  In order for them to replace my drives, I've had to do "write"
testing, which destroys all data on the drive, so you may want to disconnect
power from one of the drives before you play around with that.  If the disk
is good, then you're all set.  If not, get it replaced.  I've seen drives
fail very quickly before, it's always been a manufacturing defect of some
kind.  HTH,
Greg



FAQ update

2000-08-04 Thread Gregory Leblanc

Here's a new version, with a couple of changes.  What other questions get
asked all the time?
Greg

Linux-RAID FAQ

Gregory Leblanc

  gleblanc (at) cu-portland.edu
   
   Revision History
   Revision v0.02 4 August 2000 Revised by: gml
   Revised the "How do I patch?" and the "What does /proc/mdstat look
   like?" questions.
   Revision v0.01 31 July 2000  Revised by: gml
   Initial draft of this FAQ.
   
   This is a FAQ for the Linux-RAID mailing list, hosted on
   vger.rutgers.edu. It's intended as a supplement to the existing
   Linux-RAID HOWTO, to cover questions that keep occurring on the
   mailing list. PLEASE read this document before you post to the list.
 _
   
   1. General
  
1.1. Where can I find archives for the linux-raid mailing list?

   2. Kernel
  
2.1. I'm running the DooDad Linux Distribution. Do I need to
patch my kernel to make RAID work?

2.2. How can I tell if I need to patch my kernel?
2.3. Where can I get the latest RAID patches for my kernel?
2.4. How do I apply the patch to a kernel that I just downloaded
from ftp.kernel.org?

1. General

   1.1. Where can I find archives for the linux-raid mailing list?
   
   My favorite archives are at Geocrawler.
   
   Other archives are available at
   http://marc.theaimsgroup.com/?l=linux-raid&r=1&w=2
   
   Another archive site is
   http://www.mail-archive.com/linux-raid@vger.rutgers.edu/.
   
2. Kernel

   2.1. I'm running the DooDad Linux Distribution. Do I need to patch my
   kernel to make RAID work?
   
   Well, the short answer is, it depends. Distributions that are keeping
   up to date include the RAID patches in their kernels; the kernel that
   RedHat distributes does, as do some others. If you download a 2.2.x
   kernel from ftp.kernel.org, then you will need to patch your kernel.
   
   2.2. How can I tell if I need to patch my kernel?
   
   The easiest way is to check what's in /proc/mdstat. Here's a sample
   from a 2.2.x kernel, with the RAID patches applied.

[gleblanc@grego1 gleblanc]$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [translucent]
read_ahead not set
unused devices: <none>
[gleblanc@grego1 gleblanc]$

   If the contents of /proc/mdstat looks like the above, then you don't
   need to patch your kernel.
   
   Here's a sample from a 2.2.x kernel, without the RAID patches applied.

[root@finch root]$ cat /proc/mdstat
Personalities : [1 linear] [2 raid0] [3 raid1] [4 raid5]
read_ahead not set
md0 : inactive
md1 : inactive
md2 : inactive
md3 : inactive

   If your /proc/mdstat looks like this one, then you need to patch your
   kernel.
   
   2.3. Where can I get the latest RAID patches for my kernel?
   
   The patches for the 2.2.x kernels up to, and including, 2.2.13 are
   available from ftp.kernel.org. Use the kernel patch that most closely
   matches your kernel revision. For example, the 2.2.11 patch can also
   be used on 2.2.12 and 2.2.13.
   
   The patches for 2.2.14 and later kernels are at
   http://people.redhat.com/mingo/raid-patches/. Use the right patch for
   your kernel, these patches haven't worked on other kernel revisions
   yet.
   
   2.4. How do I apply the patch to a kernel that I just downloaded from
   ftp.kernel.org?
   
   First, unpack the kernel into some directory, generally people use
   /usr/src/linux. Change to this directory, and type patch -p1 <
   /path/to/raid-version.patch.
   
   On my RedHat 6.2 system, I decompressed the 2.2.16 kernel into
   /usr/src/linux-2.2.16. From /usr/src/linux-2.2.16, I type in patch -p1
   < /home/gleblanc/raid-2.2.16-A0. Then I rebuild the kernel using make
   menuconfig and related builds.



RE: Patched Kernel

2000-08-04 Thread Gregory Leblanc

Oooh ooh ooh!  FAQ!  :-)  Unfortunately, it's not on the web yet, but here's
the relevant section:

   2.2. How can I tell if I need to patch my kernel?
   
   The easiest way is to check what's in /proc/mdstat. Here's a sample
   from a 2.2.x kernel, with the RAID patches applied.

[gleblanc@grego1 gleblanc]$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [translucent]
read_ahead not set
unused devices: <none>
[gleblanc@grego1 gleblanc]$

   If the contents of /proc/mdstat looks like the above, then you don't
   need to patch your kernel.
   
   Here's a sample from a 2.2.x kernel, without the RAID patches applied.

[root@finch root]$ cat /proc/mdstat
Personalities : [1 linear] [2 raid0] [3 raid1] [4 raid5]
read_ahead not set
md0 : inactive
md1 : inactive
md2 : inactive
md3 : inactive

   If your /proc/mdstat looks like this one, then you need to patch your
   kernel.

HTH,
Greg

> -Original Message-
> From: Felix Leder [mailto:[EMAIL PROTECTED]]
> Sent: Friday, August 04, 2000 2:11 PM
> To: linux raid mailing list
> Subject: Patched Kernel
> 
> 
> What does a patched kernel's /proc/mdstat look like without any
> raid-drives configured?
> 



RE: FAQ

2000-08-03 Thread Gregory Leblanc

> -Original Message-
> From: James Manning [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, August 03, 2000 10:35 AM
> To: [EMAIL PROTECTED]
> Subject: Re: FAQ
> 
> [Marc Mutz]
> > >2.4. How do I apply the patch to a kernel that I just 
> downloaded from
> > >ftp.kernel.org?
> > > 
> > >Put the downloaded kernel in /usr/src. Change to this 
> directory, and
> > >move any directory called linux to something else. 
> Then, type tar
> > >-Ixvf kernel-2.2.16.tar.bz2, replacing 
> kernel-2.2.16.tar.bz2 with your
> > >kernel. Then cd to /usr/src/linux, and run patch -p1 < 
> raid-2.2.16-A0.
> > >Then compile the kernel as usual.
> > 
> > Your tar is too customized to be in a FAQ.
> 
> there is no bzip2 standard in gnu tar, so let's be 
> intelligent and avoid
> the issue by going with the .gz tarball as a recommendation.  -z is
> standard.

It's going to be changed to the POSIX tar and GNU gzip invoked separately,
because everybody felt the need to bitch, and because people aren't smart
enough to not send me two copies of the message.  :-)

> Also, none of the tarballs will start with "kernel-" but "linux-"
> anyway, so that needs fixing.  Also, I'd add "/path/to/" before the
> raid in the patch command, since otherwise we'd need to tell them to
> move the patch over to that directory (pedantic, yes, but still)

ok, cool, I'll fix those.  

> oh, and "move any directory called linux to something else" seems to
> miss the possibility of a symlink, where renaming the symlink would
> be kind of pointless.  Whether tar would just kill the symlink at
> extract time anyway is worth a check.

Tar likes to clobber things when I give it half a chance.  I'll mention
about the symlink a bit more, although perhaps I should just tell people
that they're expected to be familiar with downloading, unpacking, and
building kernels before they read this document.
Greg



FAQ

2000-08-02 Thread Gregory Leblanc

Here's a quickie FAQ, it's very incomplete, but I wanted to get some
feedback on what I've got right now.  Thanks,
Greg

Linux-RAID FAQ

Gregory Leblanc

  gleblanc (at) cu-portland.edu
   
   Revision History
   Revision v0.01 31 July 2000 Revised by: gml
   Initial draft of this FAQ.
   
   This is a FAQ for the Linux-RAID mailing list, hosted on
   vger.rutgers.edu. It's intended as a supplement to the existing
   Linux-RAID HOWTO, to cover questions that keep occurring on the mailing
   list. PLEASE read this document before you post to the list.
 _
   
   1. General
  
1.1. Where can I find archives for the linux-raid mailing list?

   2. Kernel
  
2.1. I'm running the DooDad Linux Distribution. Do I need to
patch my kernel to make RAID work?

2.2. How can I tell if I need to patch my kernel?
2.3. Where can I get the latest RAID patches for my kernel?
2.4. How do I apply the patch to a kernel that I just downloaded
from ftp.kernel.org?

1. General

   1.1. Where can I find archives for the linux-raid mailing list?
   
   My favorite archives are at Geocrawler.
   
   Other archives are available at
   http://marc.theaimsgroup.com/?l=linux-raid&r=1&w=2
   
   Another archive site is
   http://www.mail-archive.com/linux-raid@vger.rutgers.edu/.
   
2. Kernel

   2.1. I'm running the DooDad Linux Distribution. Do I need to patch my
   kernel to make RAID work?
   
   Well, the short answer is, it depends. Distributions that are keeping
   up to date include the RAID patches in their kernels; the kernel that
   RedHat distributes does, as do some others. If you download a 2.2.x
   kernel from ftp.kernel.org, then you will need to patch your kernel.
   
   2.2. How can I tell if I need to patch my kernel?
   
   The easiest way is to check what's in /proc/mdstat. Here's a sample
   from a 2.2.x kernel, with the RAID patches applied.
[gleblanc@grego1 gleblanc]$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [translucent]
read_ahead not set
unused devices: <none>
[gleblanc@grego1 gleblanc]$


   If the contents of /proc/mdstat looks like the above, then you don't
   need to patch your kernel.
   
   I'll get a copy of something from an UN-patched 2.2.x kernel and put
   it here shortly. If your /proc/mdstat looks like this one, then you
   need to patch your kernel.
   
   2.3. Where can I get the latest RAID patches for my kernel?
   
   The patches for the 2.2.x kernels up to, and including, 2.2.13 are
   available from ftp.kernel.org. Use the kernel patch that most closely
   matches your kernel revision. For example, the 2.2.11 patch can also
   be used on 2.2.12 and 2.2.13.
   
   The patches for 2.2.14 and later kernels are at
   http://people.redhat.com/mingo/raid-patches/. Use the right patch for
   your kernel, these patches haven't worked on other kernel revisions
   yet.
   
   2.4. How do I apply the patch to a kernel that I just downloaded from
   ftp.kernel.org?
   
   Put the downloaded kernel in /usr/src. Change to this directory, and
   move any directory called linux to something else. Then, type tar
   -Ixvf kernel-2.2.16.tar.bz2, replacing kernel-2.2.16.tar.bz2 with your
   kernel. Then cd to /usr/src/linux, and run patch -p1 < raid-2.2.16-A0.
   Then compile the kernel as usual.



RE: Looking for Archive of messages sent to linux-raid@vger.rutgers.edu

2000-07-31 Thread Gregory Leblanc

> -Original Message-
> From: Anthony Di Paola [mailto:[EMAIL PROTECTED]]
> Sent: Monday, July 31, 2000 9:18 AM
> To: [EMAIL PROTECTED]
> Subject: Looking for Archive of messages sent to
> [EMAIL PROTECTED]
> 
> Does anyone keep an archive of the messages sent to this list 
> accessible
> via the web?  I've been looking for one for some time now without any
> luck.

FAQ Question number 1:  Where can I find archives for the Linux-RAID mailing
list?
FAQ Answer number 1:  There are several different archives, but I generally
use http://www.geocrawler.com/.
Greg

P.S.  If anybody has more FAQ, or other archives, let me know.  I'll try to
keep a FAQ and post it here, say, once a month?



RE: Question on disk benchmark and fragmentation

2000-07-27 Thread Gregory Leblanc

> -Original Message-
> From: Corin Hartland-Swann [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, July 27, 2000 7:53 AM
> To: Gregory Leblanc
> Cc: Holger Kiehl; [EMAIL PROTECTED]
> Subject: RE: Question on disk benchmark and fragmentation
> 
[snip]
> When I was comparing performance of RAID0+1 to RAID5 there was a big
> difference in how quickly (as per number of threads) they ground to a
> halt. Here's an example:
> 
> ./tiobench.pl --size 256 --dir /mnt/md3/ --block 4096 --threads 1
>--threads 2 --threads 4 --threads 16 --threads 32 --threads 64
>--threads 128 --threads 256

I'd recommend that you add a --numruns 4 (or more, if you like) to these
options.  This will help to get more consistent numbers, which you are NOT
getting judging from the first WRITE performance number in the first two
tests.  I believe that I found 4 was a good numruns for me, YMMV.
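In other words, something like:

./tiobench.pl --size 256 --dir /mnt/md3/ --block 4096 --numruns 4 \
    --threads 1 --threads 2 --threads 4 --threads 16 --threads 32 \
    --threads 64 --threads 128 --threads 256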
Grego



RE: Question on disk benchmark and fragmentation

2000-07-27 Thread Gregory Leblanc

> -Original Message-
> From: Corin Hartland-Swann [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, July 27, 2000 7:15 AM
> To: Holger Kiehl
> Cc: [EMAIL PROTECTED]
> Subject: Re: Question on disk benchmark and fragmentation
> 
> Holger,
> 
> On Thu, 27 Jul 2000, Holger Kiehl wrote:
> > Will this not influence the performance a lot since the head of
> > the disk has to walk all over the disk? Thus making comparisons
> > practically useless since you never know the state of fragmentation?
> 
> I think you've got it exactly right here. Whenever I do a 
> benchmark on a
> disk, I follow the following basic plan:
> 
> 1) Use a freshly formatted disk
> 2) Disable all but 32M RAM
> 3) Switch to single-user mode
> 4) Measure performance at start (maximum) and end (minimum)
> 5) On large disks (>20G or so), try the first 1G and the last 
> 1G by using
>fdisk to create partitions there
> 6) Use tiotest, NOT bonnie! Try multiple threads (I use 1, 2, 
> 4, 8, 16,
>32, 64, 128, 256 threads - this is perhaps excessive!)

What size datasets are you using?  Bonnie++ is still a good benchmark,
although it stresses things differently.  The maximum number of threads that
you should need to (or probably even want to) run is between 2x and 3x the
number of disks that you have installed.  That should ensure that every
drive is pulling 1 piece of data, and that there is another thread that is
waiting for data while that one is being retrieved.  

> > Hope this is not to much off topic.
> 
> I think it's pretty important when talking about RAID, since 
> a lot of us
> are using it for performance reasons, with the redundancy as an added
> bonus.

Heh, I'm using it because it provides redundancy, the added speed from
Mika's RAID 1 read balancing patch is just a perk...  HTH,
Grego



RE: raid and 2.4 kernels

2000-07-27 Thread Gregory Leblanc

> -Original Message-
> From: Danilo Godec [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, July 27, 2000 12:22 AM
> To: Neil Brown
> Cc: [EMAIL PROTECTED]
> Subject: RE: raid and 2.4 kernels
> 
> On Thu, 27 Jul 2000, Neil Brown wrote:
> 
> > If raid on 2.4 is faster than raid in 2.2, we say "great".
> > If it is slower, we look at the no-raid numbers.
> > If no-raid on 2.4 is slower than no-raid on 2.2, we say "oh dear, the
> > disc subsystem is slower on 2.4", and point the finger 
> appropriately.
> > If no-raid on 2.2 is faster than no-raid on 2.4, then we say 
> "Hmm, must
> > be a problem with raid" and point the finger there.
> > 
> > Does that make sense?
> 
> In a way, yes. But raid could depend on other parts of the kernel more
> heavily then no-raid disk access and thus could be more affected by
> errors/problems in those parts.

Not really, if you structure the test correctly.  Take a FAST system, with a
FAST bus, and a couple of disks that can't come close to saturating the bus
(either memory, disk, or interface (probably PCI)), then you could perform a
realistic test.  I haven't got the time or the drives to do that right now,
but it certainly would be nice.  I'll take a look at the other posted
results a bit later, and see if I can't make a few suggestions.  It would be
nice to see a similar test done with tiobench as well as with Bonnie++...
Later,
Grego



RE: raid and 2.4 kernels

2000-07-26 Thread Gregory Leblanc

> -Original Message-
> From: Neil Brown [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, July 26, 2000 7:41 PM
> To: Anton
> Cc: [EMAIL PROTECTED]
> Subject: Re: raid and 2.4 kernels
> 
> On Wednesday July 26, [EMAIL PROTECTED] wrote:
> > do the kernel developers responsible for RAID read this 
> list?  I would be
> > interested in seeing some constructive discussion about the 
> reports of
> > degraded RAID performance in the 2.4 kernels.  It is particularly
> > disappointing given that SMP appears to be a lot better in 
> 2.4 vs 2.2
> 
> As Jakob mentioned,  the slow down is quite possibly related to other
> parts of the kernel.
> I would really like to see speed comparisons for
>  
>   2.2 no raid
>   2.2 raid
>   2.4 no raid
>   2.4 raid
> 
> where 2.3+raid didn't fit the pattern, before I looked too deeply.
> 
> Given the code at the moment, I am highly confident that linear, raid0
> and raid1 should be just as fast in 2.4 as in 2.2.
> There are some issues with raid5 that I am looking into. I don't
> know that they affect speed much, though they might.

Could you be a little more specific?  Speed comparisons on disk access?
Then you can't compare RAID with no RAID effectively.  You could compare the
speed of 2.2/2.4 RAID, and 2.2/2.4 no RAID, but comparisons across would
seem to be meaningless.  Later,
Grego



RE: speed and scaling

2000-07-18 Thread Gregory Leblanc

Enough with the vulgarities.  This doesn't really belong on the RAID list
any longer, but I'll make a few points below.

> -Original Message-
> From: Marc Mutz [mailto:[EMAIL PROTECTED]]
> 
> > > > The alphas we have here have the same number of slots.
> > > But not only one bus. They typically have 3 slots/bus.
> > 
> > There are multiple pci bus x86 motherboards. Generally 
> found on systems
> > with >6 slots. I have seen x86 motherboards with 3 PCI 
> buses, 
> 
> I'd like to see how the x86 memory subsystem can saturate 
> three (or only
> two) 533MB/sec 64/66 PCI busses and still have the bandwidth 
> to compute
> a 90MB/sec stream of data.

A P-III memory subsystem is capable of probably 800MB/sec; I doubt that it
can handle more than that.  Alphas and SPARCs have more than that, but you
pay through the nose for it.  It's also worth noting that x86 shares memory
bandwidth when you do SMP (800MB/sec between two processors), where the EV6
Alphas have a switched memory bus.  I haven't investigated that beyond
reading a couple of papers, but you can find more of that on Compaq's
website.

> > but the most
> > ive seen on alpha or sparc is 2.
> 
> I never denied that such beasts exist. I just wanted to point 
> out that a
> x86 machine with those mobos would come close in price to the alpha
> solution.
> I simply can't imagine that there are no alpha boxen with more than 2
> PCI busses. If I had a faster internet connection now, I'd 
> check the web
> site of alpha-processor Inc.

http://www.compaq.com/alphaserver/gs80/index.html  This isn't even the top
of the line Alpha, but it has 16 (that's 10 in hex, or 20 in octal) PCI
busses.  Again, you PAY for that kind of machine.  I'm sure that the top of
the Alpha line must be close to as expensive as a Sun Ultra Enterprise
10000, which goes for about seven figures.  Now that we know that you can
get bigger machines out of Alpha/SPARC than you can out of "stock" x86
machines, and that you have to pay for that kind of performance, can we get
back to talking about RAID on Linux please?
Grego



RE: speed and scaling

2000-07-10 Thread Gregory Leblanc

> -Original Message-
> From: Seth Vidal [mailto:[EMAIL PROTECTED]]
> Sent: Monday, July 10, 2000 12:23 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: speed and scaling
> 
> So were considering the following:
> 
> Dual Processor P3 something.
> ~1gb ram.
> multiple 75gb ultra 160 drives - probably ibm's 10krpm drives
> Adaptec's best 160 controller that is supported by linux. 
> 
> The data does not have to be redundant or stable - since it can be
> restored from tape at almost any time.
> 
> so I'd like to put this in a software raid 0 array for the speed.
> 
> So my questions are these:
>  Is 90MB/s a reasonable speed to be able to achieve in a raid0 array
> across say 5-8 drives?

Assuming sequential reads, you should be able to get this from good drives.
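(Quick arithmetic: 90MB/s across 5 drives means each drive must sustain
18MB/s; across 8 drives, about 11MB/s each.  Either is within reach of good
10k rpm drives on sequential reads.)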

> What controllers/drives should I be looking at?

I'm not familiar with current top-end drives, but you should be looking for
at least 4MB of cache on the drives.  I think the best drives that you'll
find will be able to deliver 20MB/sec without trouble, possibly a bit more.
I seem to remember somebody on this list liking Adaptec cards, but nobody on
SPARC lists will touch the things.  I might look at a Tekram, or a Symbios
based card, I've heard good things about them, and they're used on some of
the bigger machines that I've worked with.  Later,
Grego



RAID, persistent superblock on SPARC

2000-07-09 Thread Gregory Leblanc

What's the current status of RAID on SPARC?  I haven't had a chance to keep
up very much, as I wasn't using RAID on SPARCs.  I'm about to build a
mirrored system here, and I'd like to make sure that I'm not going to get
hosed because of some bug.  Thanks,
Grego

|---|
| Windows NT has detected that there were no errors |
| for the past 10 minutes. The system will now try  |
| to restart or crash. Click the OK button to   |
| continue. |
|  < Ok >   |
|---|
(sigline nicked from Jayan M on comp.os.linux.misc) 



RE: Dell PERC2/SC

2000-07-04 Thread Gregory Leblanc

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, July 04, 2000 8:33 PM
> To: Michael Ghens
> Cc: [EMAIL PROTECTED]
> Subject: Re: Dell PERC2/SC
> 
> On Tue, 4 Jul 2000, Michael Ghens wrote:
> 
> > Also how big of a raid partition is possible? I have a data 
> partition need
> > of 36 gigs plus.
> 
> No problem.  They make individual drives bigger than that 
> now.  You can
> supposedly have ext2 partitions up to a few terabytes.  If you fill a
> partition that big and the system crashes, fsck time might be 
> a good time
> to schedule your next vacation :)

And since you'll probably be on an x86 system, you'll be limited to file
sizes of 2GB.  That might suck for a database server, and it certainly sucks
for my VMware workstation, but for internet and file/print servers, it
shouldn't be a problem.
Grego



RE: problem with superblock

2000-06-28 Thread Gregory Leblanc

> -Original Message-
> From: Anton [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 28, 2000 1:21 PM
> To: Gregory Leblanc
> Cc: '[EMAIL PROTECTED]'; [EMAIL PROTECTED]
> Subject: RE: problem with superblock
> 
> So I have a total of 7 disks in the system, 6 of which are in 
> the RAID and
> I need to know that if I want to swap sdb1, which physical disk do I
> replace... I know that there is a utility for windows which makes the
> drive's LED flash.

I don't know of any utility to do this, but I never bother anyway.  You can
figure out the SCSI ID of the drive that you need to remove by looking at
/proc/scsi.  The second SCSI drive will be sdb, and you can find the SCSI ID
from there.  Once you know the SCSI ID, remove that drive.  
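
To illustrate, a trimmed sketch (your output will differ with your
hardware):

    # cat /proc/scsi/scsi
    Attached devices:
    Host: scsi0 Channel: 00 Id: 05 Lun: 00
      Vendor: IBM   Model: DDRS-34560D   Rev: DC1B
      Type:   Direct-Access
    Host: scsi0 Channel: 00 Id: 06 Lun: 00
      Vendor: IBM   Model: DDRS-34560D   Rev: DC1B
      Type:   Direct-Access

The second Direct-Access entry here is sdb, so the disk at SCSI ID 6 would
be the one to pull.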
Grego



RE: problem with superblock

2000-06-28 Thread Gregory Leblanc

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 27, 2000 5:00 PM
> To: [EMAIL PROTECTED]
> Subject: Re: problem with superblock
> 
> ## Betreff  : problem with superblock
> ## Ersteller: [EMAIL PROTECTED]   (Anton)
> 
> a> And is how do you map
> a> names like sdb1 to the physical disk?
> It is the first disk on the second SCSI-Controller.

Uhm, no, it's not.  The stock Linux kernel maps SCSI drives in the order
that it finds them.  The first SCSI disk is /dev/sda, the second is
/dev/sdb, the third, /dev/sdc, and so on.  /dev/sdb1 is the first partition
on the second SCSI drive.  If you add another SCSI disk that the kernel
finds earlier, then that disk will no longer be /dev/sdb, but some other
disk.  Persistent superblocks make sure that your RAID arrays can start up
even when you change the number of SCSI disks in your system.
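
For reference, that's the persistent-superblock line in each raiddev stanza
of /etc/raidtab, something like this (device names are only examples):

    raiddev /dev/md0
        raid-level            1
        nr-raid-disks         2
        chunk-size            4
        persistent-superblock 1
        device                /dev/sda1
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1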
Grego



RE: performance statistics for RAID?

2000-06-27 Thread Gregory Leblanc

> -Original Message-
> From: James Manning [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 27, 2000 6:37 PM
> To: Linux Raid list (E-mail)
> Subject: Re: performance statistics for RAID?
> 
> [Gregory Leblanc]
> > Is there any chance of keeping track of these with software RAID?
> 
> AFAIK, sct's patch to give sar-like data out of /proc/partitions gives
> all of the above stats and more... neat patch :)  The user-space tool
> should be in the same dir.  And, FWIW, I get asked about how 
> people can
> get a "sar" for Linux *very* often by the SCO people here at work.

Since I had a hard time finding this patch, it's available here:
ftp://ftp.uk.linux.org/pub/linux/sct/fs/profiling/
As soon as we get a replacement drive for the server at work, we'll be
trying that out (software RAID kept the server running, but I can't do much
RAID testing on just 1 drive).  If anybody has worked with this, or gets it
working before I do, let me know what you turn up.
Greg



performance statistics for RAID?

2000-06-27 Thread Gregory Leblanc

I just read that message from James Manning on some performance tuning, and
it made me think about this.  On some of our RAID controllers, they collect
statistics for the RAID volumes.  The one that I'm thinking of collects
things like this, except that I've trimmed some of the irrelevant
information.

Size        Reads   Writes
1   KB      26537   190557
2   KB      16084    56161
4   KB     118645   756926
8   KB      61132   110969
16  KB      75669    34512
32  KB     132924     4567
64  KB     324735    10850
128 KB          0        0
256 KB          0        0
512 KB          0        0
1   MB+         0        0
Total      755726  1164542


The statistics report a few other things, but from looking at numbers like
these, I could grab them into a spreadsheet program like Gnumeric,
manipulate them a bit, and figure out what the "optimal" sizes for things on
my RAID array are.
Is there any chance of keeping track of these with software RAID?  Somewhere
like /proc/mdinfo, or even another mdSTUFF file in /proc, or possibly just
tacked onto the end of the mdstat that's there already.  Now, I can see
somebody asking about performance considerations here, but it's storing a
tiny amount of data, just the number of reads and writes for each size block
of data.  Does this sound like something that would be useful?  Can I pay
somebody to write it?  :-)  Thanks,
Grego



RE: Raid level 0 on an SMP Sparc 20

2000-06-27 Thread Gregory Leblanc

> -Original Message-
> From: Robert [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 27, 2000 10:22 AM
> To: [EMAIL PROTECTED]
> Subject: Raid level 0 on an SMP Sparc 20
> 
> I am converting a Sparc-station 20 to linux, specifically 
> RedHat 6.2.  I
> have been having lots of problems with getting my RAID array 
> to work.  I
> have experience with RAID-0 and RAID-1 working fine on Intel 
> processors.
> This is my first attempt at using Linux (and RAID) on a Sparc.  I have
> tried many different setups to get the RAID to work.  Some of 
> them worked
> occasionally.  All of them were either intermittant or failed 
> to work at
> all.

There are (or at least were in 2.2.14/15) some endian-ness issues with
persistent superblocks.  There was a patch, and a location to download the
patch, posted to this list a while ago, unfortunately I didn't keep that
mail, so you'll have to check the archives.
Grego



RE: Looking for drivers for DPT I2O card.

2000-06-25 Thread Gregory Leblanc

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, June 25, 2000 7:36 PM
> To: Alvin Starr
> Cc: [EMAIL PROTECTED]
> Subject: Re: Looking for drivers for DPT I2O card.
> 
> On Sun, 25 Jun 2000, Alvin Starr wrote:
> 
> > I am trying to get a DPT I2O card running with RH 6.2. Does 
> anybody have
> > any pointers or suggestions?
> 
> SmartRAID V?  I think you'll have to find[1] their web site 
> and download a
> driver from them.  
> 
> [1] I don't know if it's a misconfiguration or a result of 
> Adaptec buying
> DPT, but www.dpt.com is not what it used to be.

Head straight for their ftp site, ftp.dpt.com.  Much easier to find things
there, and last I checked they didn't have all of the drivers listed on the
website.  At least they'll be integrated into the 2.4.x kernel, if I'm not
mistaken.
Grego



RE: Installation of RAID

2000-06-23 Thread Gregory Leblanc

> -Original Message-
> From: Dimitri SZAJMAN [mailto:[EMAIL PROTECTED]]
> Sent: Friday, June 23, 2000 7:16 AM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Installation of RAID
> 
> Hi this is me again I have a last question :
> 
> So I'm gonna install monday a linux RH 6.2.
> The computer will have 2 UW2 SCSI HDD (let's say HDD1 and HDD2, both
> 9Gb) and I want to do software RAID 1.
> Do I have something specific to do when installing my RH ? 
> Should I say
> somewhere that I want a RAID system ? Should I do it later, after
> installation ? What partitions should I do ? The same on both 
> HDD ? Like
> one for /, one for /home, one for swap ?
> 
> Like :
> 
> HDD1 : / = 2 Go ; /home = 7.8 Go ; swap = 200 Mo
> HDD2 : / = 2 Go ; /home = 7.8 Go ; swap = 200 Mo

The RH GUI has options for creating RAID partitions.  I tend to create a
20MB partition for /boot, then a partition for swap, then another partition
for /.  I haven't tried Swap on RAID yet, but there are ways to make it
work.  Try looking through the archives; a google search will find them
(I've forgotten the URL).
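As a sketch, mirroring that layout across both disks would look something
like this (sizes and device names are only examples):

    /dev/sda1 + /dev/sdb1    20MB    type fd  ->  /dev/md0, mounted as /boot
    /dev/sda2 + /dev/sdb2   200MB    type 82  ->  swap
    /dev/sda3 + /dev/sdb3    rest    type fd  ->  /dev/md1, mounted as /

Later,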
Greg



RE: Benchmarks, raid1 (was raid0) performance

2000-06-23 Thread Gregory Leblanc

> -Original Message-
> From: Hugh Bragg [mailto:[EMAIL PROTECTED]]
> Sent: Friday, June 23, 2000 12:36 AM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid1 (was raid0) performance
> 
[snip]
> > > What version of raidtools should I use against a stock 2.2.16
> > > system with raid-2.2.16-A0 patch running raid1?
> > 
> > The 0.90 ones.  I think that Ingo has some tools in the 
> same place as the
> > patches, those should be the right tools.  I'll bet that 
> the Software-RAID
> > HOWTO tells where to get the latest tools.  You can find it at
> > http://www.LinuxDoc.org/
> > Greg
> 
> I think you mean the only raid tools there
> people.redhat.com/mingo/raid-patches/raidtools-dangerous-0.90-
> 2116.tar.gz?
> 
> I'm a bit sceptical about using something that's labled dangerous.
> What is so dangerous about it and is there any more chance 
> that it will
> break
> something than the standard release RH 6.2 raid tools?

Nah, RedHat ships with a variant of these tools.  You could probably
(somebody check me here) use the tools that ship with RH6.2 and have them
work just fine.
Greg



RE: autostart with raid5 over raid0?

2000-06-21 Thread Gregory Leblanc

> -Original Message-
> From: Carlos Carvalho [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 21, 2000 2:19 PM
> To: [EMAIL PROTECTED]
> Subject: autostart with raid5 over raid0?
> 
> Hi all,
> 
> I've been using raid5 with auto-detection for over a year without
> problems. Everything including the root fs is on raid5, the machine
> boots from floppy.
> 
> I now want to rearrange the disks in raid0 arrays, and make a raid5 of
> these. Will auto-detection/autostart work in this case? It should in
> theory...

Nope.  RAID code doesn't support layering of RAID right now.  There was a
special case for RAID 1 over 0 (or the other way around?), but it turns out
that it didn't quite work properly.  So not only will autodetect not work
correctly, it won't work at all.  :-(  I don't know what the plans are for
this in 2.4, but it would definitely be cool.
Greg



RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Gregory Leblanc

> -Original Message-
> From: Diegmueller, Jason (I.T. Dept) [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 21, 2000 10:46 AM
> To: 'Gregory Leblanc'; 'Hugh Bragg'; [EMAIL PROTECTED]
> Subject: RE: Benchmarks, raid1 (was raid0) performance
> 
> : > Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
> : > applying mingo's raid-2.2.16-A0 patch?
> : 
> : I don't see any reason not to apply it, although I haven't 
> : tried it with 2.2.16.
> 
> I have been out of the linux-raid world for a bit, but a 
> two-drive RAID1 installation yesterday has brought me back.  
> Naturally, when I saw mention of radi1readbalance, I immediately
> tried it.
> 
> I'm running 2.2.17pre4, and it patched cleanly.  But bonnie++
> is showing no change in read performance.  I am using IDE drives,
> but they are on separate controllers (/dev/hda, and /dev/hdc) 
> with both drives configured as masters.
> 
> Anyone have any tricks up their sleeves?

None offhand, but can you post your test configuration/parameters?  Things
like test size, relevant portions of /etc/raidtab, things like that.  I know
this should be a whole big list, but I can't think of all of them right now.
FYI, I don't do IDE RAID (or IDE at all), but it's pretty awesome on SCSI.
Greg



RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Gregory Leblanc

> -Original Message-
> From: Hugh Bragg [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 21, 2000 5:04 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid1 (was raid0) performance
> 
> Patch http://www.icon.fi/~mak/raid1/raid1readbalance-2.2.15-B2
> improves read performance right? At what cost?

Only the cost of patching your kernel, I think.  This patch does some nifty
tricks to help pick which disk to read data from, and will double the read
rates from RAID 1, assuming that you don't saturate the bus.  

> Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
> applying mingo's raid-2.2.16-A0 patch?

I don't see any reason not to apply it, although I haven't tried it with
2.2.16.

> What version of raidtools should I use against a stock 2.2.16
> system with raid-2.2.16-A0 patch running raid1?

The 0.90 ones.  I think that Ingo has some tools in the same place as the
patches, those should be the right tools.  I'll bet that the Software-RAID
HOWTO tells where to get the latest tools.  You can find it at
http://www.LinuxDoc.org/
Greg



RE: Hardware Raid 5

2000-06-20 Thread Gregory Leblanc

> -Original Message-
> From: Gustavo Aguiar [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 20, 2000 10:17 AM
> To: Gregory Leblanc; linux-raid
> Subject: Re: Hardware Raid 5
> 
> Hi, 
> Does this means that I can't use Hardware Raid 5 on linux 
> (redhat 6.2) or this 
> means that I can't use Raid 5 on linux ? (I have a 
> adaptec aaa7869) Can I 
> use raid 5 on linux made by software ??? 

Neither.  You cannot use Adaptec "hardware" RAID, of any level, because it's
not hardware RAID.  Software RAID using the Linux kernel RAID drivers should
work fine.
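
For the software side, a minimal /etc/raidtab sketch for a 3-disk RAID 5
(device names are only examples, adjust them to your disks):

    raiddev /dev/md0
        raid-level            5
        nr-raid-disks         3
        nr-spare-disks        0
        persistent-superblock 1
        parity-algorithm      left-symmetric
        chunk-size            32
        device                /dev/sdb1
        raid-disk             0
        device                /dev/sdc1
        raid-disk             1
        device                /dev/sdd1
        raid-disk             2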

Greg



RE: Hardware Raid 5

2000-06-19 Thread Gregory Leblanc

> -Original Message-
> From: Listas [mailto:[EMAIL PROTECTED]]
> Sent: Monday, June 19, 2000 2:22 PM
> To: Gregory Leblanc
> Cc: linux-raid
> Subject: Re: Hardware Raid 5
> 
> Hi,
> 
> It is a AAA133 from adaptec. I didn't found any thing saying  
> about it at
> Hardware Compatibility List.

Adaptec does not have any hardware RAID cards (at least none that are called
adaptec cards).  The AAA133 (and the rest of the AAA series) are SCSI cards
that come with RAID software drivers for NT and Netware, maybe for a few
other OSs, not for Linux.  RAID features are not supported on Linux, and I'm
somewhat surprised that you even got it to be recognized as a SCSI
controller.  Adaptec has said that they don't want this card to be supported
under Linux, and have refused to release the information needed to make good
drivers for it.  Anyway, if you want to use this card, get NT or Netware.
If you want a hardware RAID card, look elsewhere.  Perhaps Mylex or DPT.
Greg



RE: Hardware Raid 5

2000-06-19 Thread Gregory Leblanc

> -Original Message-
> From: Listas [mailto:[EMAIL PROTECTED]]
> Sent: Monday, June 19, 2000 12:25 PM
> To: linux-raid
> Subject: Hardware Raid 5
> 
> Hi All,
> 
> I'm having a big problem trying to install a RedHat 6.2 with a RAID 5
> made by hardware.  I found some docs on the net saying that after
> building my Array (3 system disks, Seagate 18G, and 1 spare), I start
> my installation and I don't have a single disk to make the installation
> on.  I still have 4 disks (sda, sdb, sdd, sdc).  What am I doing
> wrong??  Any help??

What kind of a RAID controller are you using?  Is it on the list of
"supported" hardware from RedHat?
Greg



RE: devfs? 2.2.14?

2000-06-19 Thread Gregory Leblanc

> -Original Message-
> From: Alfred de Wijn [mailto:[EMAIL PROTECTED]]
> Sent: Friday, June 16, 2000 8:53 AM
> To: Linux Raid mailing list
> Subject: devfs? 2.2.14?
> 
> So my question is: is there a patch for devfs-kernels? What 
> kernel version
> is the latest patch against?
> 
> I figure with a litte effort I could probably modify the 
> 2.2.11 patch to
> work with 2.2.14 devfs, but I seem to have ended up in a part of the
> universe where time is hard to come by.

Yeah, I hate that "moving target" thing that Linux does with disks.  I don't
know whether or not RAID works with devfs, I haven't gotten around to trying
it just yet.  There are patches for the newer 2.2.x stock kernels at
http://www.redhat.com/~mingo/, you'll have to try them to find out if they
work with devfs.
Greg



RE: raid linear and chunk-size

2000-06-19 Thread Gregory Leblanc

> -Original Message-
> From: Jure Pecar [mailto:[EMAIL PROTECTED]]
> Sent: Monday, June 19, 2000 12:47 AM
> To: [EMAIL PROTECTED]
> Subject: raid linear and chunk-size
> 
> Hi all,
> 
> Just noticed this while building some raid linear for my cd 
> toaster box:
> Software-Raid-HOWTO, shipped with raidtools, tells you that chunk 
> size is not nedded for raid linear. Well, if you try mkraid 
> for that raid 
> linear "array" (2.2.16 w/ 2.2.16-A0 patch), it just complains about 
> missing chunk-size. Adding chunk-size in raidtab for solves the 
> problem.
> Is it just me that feels the need for updated documentation? Or at 
> least make it clear what goes to raid 0.43 / 0.52, and what belongs 
> to 0.90.

I think chunk-size is required by the way that the raidtools parse raidtab,
but it doesn't actually affect anything for linear mode.
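
In other words, something like this parses fine, even though the chunk-size
value is ignored for linear mode (device names are only examples):

    raiddev /dev/md0
        raid-level            linear
        nr-raid-disks         2
        chunk-size            4    # keeps the parser happy, ignored for linear
        persistent-superblock 1
        device                /dev/hdc1
        raid-disk             0
        device                /dev/hdd1
        raid-disk             1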
Greg



RE: raid 0, dev/hd? too small (0kb) error in RH 6.2

2000-06-17 Thread Gregory Leblanc

First, have you read the Software RAID HOWTO?  If not, you can find it here:
http://www.LinuxDoc.org/  

> -Original Message-
> From: Fred Chavez [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, June 17, 2000 5:38 PM
> To: [EMAIL PROTECTED]
> Subject: raid 0, dev/hd? too small (0kb) error in RH 6.2
> 
> when i run mkraid /dev/md0, i get /dev/hd? too small (0kb) 
> error. i have two disks
> partitioned out. hda has the linux os/swap in hda1, 2, and 3. 
> hda4 is fd type and
> is about 4g in size. hdc has only 1 paritition (4g in size, 
> fd type ). do the disk
> capacities have to match or is it enough to create similar 
> sized fd partitions? one
> of my disks is 6g (hda) and the other is 4g (hdc).

What type of RAID volume are you trying to create?  What does your raidtab
look like?  Can you post the output of fdisk -l for those drives?
Greg



RE: thanks, apologia

2000-06-16 Thread Gregory Leblanc

> -Original Message-
> From: Jazalamadingdong Yo [mailto:[EMAIL PROTECTED]]
> Sent: Friday, June 16, 2000 9:23 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: thanks, apologia
> 
> thank you very much.  I was literally under the impression 
> that I was the 
> only person in the world who might be interested in running a 
> software-raid 
> on 2.2.16!

Of course not!  :-)  This is software under active development, and it's
also being worked into the 2.3/2.4 kernels.  

> Go out sometime as a raid-novice and see what's available out there 
> documentation-wise.  it's all OLD.  Even the 
> ftp.kernel.org/pub/linux/daemon/raid/alpha directory's last 
> update is in 
> august 99, for the 2.2.11 kernel.  appreciate the pointer to 
> the redhat 
> site.

Yeah, nobody is quite sure why Ingo isn't putting things on the kernel.org
sites, but it's his code, and his choice.  

> I'm sure this was a FAQ, but there's no archive, and no info 
> about the group 
> (that I could find looking at the usual suspects, the vger 
> http server, 
> google, linuxHQ, even deja, although this isn't a NG).  Where is the 
> archive/FAQ?

Uhm, not sure about the FAQ, although I've heard rumors of one.  Maybe
somebody else has a pointer.  As for the archive, http://Geocrawler.com
seems to have archives of LOTS of lists.  The one for linux-raid is at:
http://www.geocrawler.com/lists/3/Linux/57/0/.  The post I'm replying to was
at the top when I just looked, this one should be there by the time you hit
that page.  Later,
Greg



RE: Am I the only one?

2000-06-16 Thread Gregory Leblanc

> -Original Message-
> From: Jazalamadingdong Yo [mailto:[EMAIL PROTECTED]]
> Sent: Friday, June 16, 2000 8:49 PM
> To: [EMAIL PROTECTED]
> Subject: Am I the only one?
> 
> Finding info on the software RAID in linux has been a bitch!  
> Well, I guess 
> there is a lot out there, but it's all circa 1978.  Anyway, 
> wanting to know 
> if there's a RAID patch somewhere in the ether for 2.2.16.

If you read the mailing list that you just posted to, you'd already know the
answer to this question.  FYI, Ingo Molar has been posting patches to his
web space, at http://www.redhat.com/~mingo/.  There is also a Software RAID
HOWTO, written by Jakob O...something (sorry, way too long for me to
remember).  It's available from http://www.LinuxDoc.org/ or from his
homepage.  Later,
Greg



RE: Linux new style software RAID recovery procedures

2000-06-15 Thread Gregory Leblanc

> -Original Message-
> From: Darren Evans [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, June 15, 2000 4:21 AM
> To: Linux-Raid
> Subject: Linux new style software RAID recovery procedures
> 
> H would like to elaborate on this in the FAQ, thought i'd better
> make sure I understand the process in case something happens.
> 
> 
> * read /var/log/messages
> * see drive has died
> * lookup scsi id -> drive name like sdb - scsi_info /dev/sdX is a good
> command for this
> * take system down [I know it's possible to hot swap it but 
> don't want to
> push luck]
> * replace failed drive with same size drive - set same SCSI 
> id's/jumpering
> * bring system back up
> * kernel/RAID should print something about drive not in volume
> * format new drive to ext2 standard - what if array is formatted to -b
> 4096 -R stride=8,
>   presumably the single drive should be also

You don't need to create a new filesystem on the drive, and you shouldn't
need to low-level format it either; almost all drives come pre-formatted
from the factory.  Note that "format" on a DOS hard disk just creates a
filesystem, while "format" on a DOS floppy does a low-level format AND
creates a filesystem.  So, to clarify, you should only need to partition
the new drive as type fd, then raidhotadd.  I think.  :-)
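
So the disk-side steps boil down to something like this (device names are
only examples; md0 is whatever array lost the disk):

    fdisk /dev/sdb                   # create one partition, set its type to fd
    raidhotadd /dev/md0 /dev/sdb1    # re-insert it, reconstruction starts
    cat /proc/mdstat                 # watch the rebuild progress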

> * partition new drive to type fd - linux raid autodetect
> * raidhotadd /dev/mdX /dev/sdX to re-insert the disk in the array
> 
> am I missing anything?
> 
> Maybe someone can clarify some of these points, but I think 
> that's how to do
> it.
Greg



RE: 2.2.16 RAID patch

2000-06-14 Thread Gregory Leblanc

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 14, 2000 12:32 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: 2.2.16 RAID patch
> 
> > the latest 2.2 (production) RAID code against 2.2.16-final 
> can be found
> > at:
> > 
> > http://www.redhat.com/~mingo/raid-patches/raid-2.2.16-A0
> > 
> > let me know if you have any problems with it.
> > 
> 
> I hate to bother the list with this, but...I have been unable to get
> Redhat 6.1/2.2.16+raid-2.2.16-A0 working with Root RAID1. I 
> have read just
> about every version of the FAQ and Software RAID how-to's without any
> luck. I can get the said config to mount the RAID1 as another 
> file system.
> However, when trying to use the RAID1 as Root, it fails. Here is my
> raidtab, conf.modules, lilo.conf and the error messages in 
> /var/log/messages.
> 
> Any help would be greatly appreciated...(there is also the output of
> /proc/mdstat when the RAID1 is mounted as another file system)

It may be just me, but I've always found initrd to be FAR more of a hassle
than it's worth.  Have you tried compiling RAID into the kernel?  
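
For a 2.2 kernel with the 0.90 patch applied, that means building the RAID
bits in rather than as modules, roughly these .config options (names from
memory, double-check them against your patched tree):

    CONFIG_BLK_DEV_MD=y
    CONFIG_AUTODETECT_RAID=y
    CONFIG_MD_RAID1=y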
Greg



RE: Benchmarks, raid1 (was raid0) performance

2000-06-14 Thread Gregory Leblanc

> -Original Message-
> From: Jeff Hill [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 13, 2000 1:26 PM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid1 (was raid0) performance
> 
> Gregory Leblanc wrote:
> 
> >>--snip--<<
> > > I conclude that on my system there is an ide saturation point (or
> > > bottleneck) around 40MB/s
> > Didn't the LAND5 people think that there was a bottleneck 
> around 40MB/Sec at
> > some point?  Anybody know if they were talking about IDE 
> drives?  Seems
> > quite possible that there aren't any single drives that are 
> hitting this
> > speed, so it's only showing up with RAID.
> > Greg
> 
> 
> Is there any place where benchmark results are listed?

Not that I know of.  Is there any interest in having these online in some
biggass database?  Assuming that I can manage it, I'll have a server online,
running some SQL server, by next weekend.  I could put things into there,
and provide some basic SQL type passthru from the web.

> I've finally
> gotten my RAID-1 running and am trying to see if the 
> performance is what
> I should expect or if there is some other issue:
> 
> Running "hdparm -t /dev/md0" a few times:
> 
>  Timing buffered disk reads:  64 MB in  3.03 seconds = 21.12 MB/sec
>  Timing buffered disk reads:  64 MB in  2.65 seconds = 24.15 MB/sec
>  Timing buffered disk reads:  64 MB in  3.21 seconds = 19.94 MB/sec

My understanding has always been that hdparm was on crack as far as speed
went.  I've never really taken the time to check, since tiobench does a
beautiful job for what I need, and because tiobench is CONSISTENT.

> And bonnie:
>               -------Sequential Output-------- ---Sequential Input-- --Random--
>               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
> Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
>           800  5402 90.9 13735 13.7  7223 15.0  5502 85.0 14062  8.9 316.7  2.8
> 
> 
> I had expected better performance with the system: Adaptec 
> 2940U2W with
> 2x Seagate Cheetah (LVD) 9.1G drives; single PII 400Mhz; 
> 512MB ECC RAM;
> ASUS P3B-F 100Mhz.

I don't have anything that caliber to compare against, so I can't really
say.  Should I assume that you don't have Mika's RAID1 read balancing patch?

> I have to say the RAID-1 works very well in my crash tests, and that's
> the most important thing.

Yep!  Although speed is the biggest reason that I can see for using Software
RAID over hardware.  Next comes price.  
Greg



RE: Benchmarks, raid1 (was raid0) performance

2000-06-13 Thread Gregory Leblanc

> -Original Message-
> From: Jeff Hill [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 13, 2000 3:56 PM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid1 (was raid0) performance
> 
> Gregory Leblanc wrote:
> > 
> > I don't have anything that caliber to compare against, so I 
> can't really
> > say.  Should I assume that you don't have Mika's RAID1 read 
> balancing patch?
> 
> I have to admit I was ignorant of the patch (I had skimmed 
> the archives,
> but not well enough). Searched the archive further, found it, 
> patched it
> into 2.2.16-RAID.
> 
> However, how nervous should I be putting it on a production server?
> Mika's note says 'experimental'. This is my main production 
> server and I
> don't have a development machine currently capable of testing RAID1 on
> (and even then, the development machine can never get the 
> same drubbing
> as production). 

I've got it on the machines that I have running RAID in production.  I'm not
aware of any "issues" with the patch, but I'm waiting for pre-releases of 2.4
to stabilize (on SPARC-32, mostly) before I start really reefing on things.
Ingo just posted something saying that the 2.4 code has Mika's patch
integrated, along with some cleanup.  Later,
Greg



RE: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread Gregory Leblanc

> -Original Message-
> From: bug1 [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 13, 2000 10:39 AM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid0 performance, 1,2,3,4 drives
> 
> Ingo Molnar wrote:
> > 
> > could you send me your /etc/raidtab? I've tested the 
> performance of 4-disk
> > RAID0 on SCSI, and it scales perfectly here, as far as 
> hdparm -t goes.
> > (could you also send the 'hdparm -t /dev/md0' results, do you see a
> > degradation in those numbers as well?)
> > 
> > it could either be some special thing in your setup, or an IDE+RAID
> > performance problem.
> > 
> > Ingo
> 
> It think it might be an IDE bottleneck.
> 
> if i use dd to read 800MB from each of my drives individually 
> the speeds
> i get are
> 
> hde=22MB/s
> hdg=22MB/s
> hdi=18MB/s
> hdk=20MB/s
> 
> 
> if i do the same tests simultaneously i get 10MB/s from each 
> of the four
> drives
> if i do the same test on just hde hdg and hdk i get 13MB/s 
> from each of
> three drives
> if i do it on hde and hdg i get 18MB/s from each. (both ide 
> channels on
> one card
> On hdi and hdk i get 15MB/s
> 
> I conclude that on my system there is an ide saturation point (or
> bottleneck) around 40MB/s

Didn't the LAND5 people think that there was a bottleneck around 40MB/Sec at
some point?  Anybody know if they were talking about IDE drives?  Seems
quite possible that there aren't any single drives that are hitting this
speed, so it's only showing up with RAID.
Greg



RE: RAID0 problems

2000-06-12 Thread Gregory Leblanc

> -Original Message-
> From: Jordan Wilson [mailto:[EMAIL PROTECTED]]
> Sent: Monday, June 12, 2000 12:16 PM
> To: [EMAIL PROTECTED]
> Subject: RAID0 problems
> 
> I have a few problems regarding my software RAID0 solution.  
> I have two
> disks, hdb and hdd, on a raid0 array.  Everything was working 
> fine until I
> upgraded my kernel (from 2.2.12 to 2.2.16).  Yes, support for RAID is
> compiled in the kernel.  On bootup, I get :

Did you patch the kernel?  Below you say that you're using the 0.90 code,
which requires a patch to work with current kernels.  That should be
available from http://www.redhat.com/~mingo/
Greg



RE: bonnie++ for RAID5 performance statistics

2000-06-09 Thread Gregory Leblanc

> -Original Message-
> From: James Manning [mailto:[EMAIL PROTECTED]]
> Sent: Friday, June 09, 2000 12:46 PM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]
> Subject: Re: bonnie++ for RAID5 performance statistics
> 
> 
> [Gregory Leblanc]
> > > [root@bod tiobench-0.3.1]# ./tiobench.pl --dir /raid5
> > > No size specified, using 200 MB
> > > Size is MB, BlkSz is Bytes, Read, Write, and Seeks are MB/sec
> > 
> > Try making the size at least double that of ram.
> 
> Actually, I do exactly that, clamping at 200MB and 2000MB currently.
> Next ver will up it to 4xRAM but probably leave the clamps as is.
> (note: only clamps when size not specified... it always 
> trusts the user)

Sounds good, James, but Darren said that his machine had 256MB of ram.  I
wouldn't have mentioned it, except that it wasn't using enough, I think.  On
a side note, I think that 3x would be a better number than 4, but maybe it's
just me.  I've got multiple machines with 256MB of ram, but only 1GB or 2GB
RAID sets.  4x ram would overflow the smaller RAID sets.  Is anybody else
using RAID 1 to get more life out of a bunch of older 1GB disks?
Greg



RE: bonnie++ for RAID5 performance statistics

2000-06-08 Thread Gregory Leblanc

> -Original Message-
> From: Darren Evans [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, June 08, 2000 2:16 AM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]
> Subject: RE: bonnie++ for RAID5 performance statistics
> 
> Hi Greg,
> 
> Yeah I know sorry about the mail line wrap thing I only
> noticed after I had sent the email.
> 
> 4 SCSI disks 40mb/s synchronous SCSI config, 2 Intel P500's 
> and 256mb RAM,
> Redhat 6.2, raid0145-19990824-2.2.11, raidtools-19990824-0.90.tar.gz
> and kernel 2.2.13 SMP.
> 
> [root@bod tiobench-0.3.1]# ./tiobench.pl --dir /raid5
> No size specified, using 200 MB
> Size is MB, BlkSz is Bytes, Read, Write, and Seeks are MB/sec

Try making the size at least double that of ram.  This helps to eliminate
the effects of caching to ram (I used to use 3x ram size, but my RAID sets
aren't big enough for that anymore).  The other thing to look at is the
number of runs.  It takes a fair bit of time to figure out what a reasonable
number is to ensure consistent results.  I've found that between 4 and 6
gets me stable numbers.
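
For example, on a 256MB box something like this should do it (check
./tiobench.pl --help for the exact flag names):

    ./tiobench.pl --dir /raid5 --size 512 --numruns 5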

[snip]
> Options ...
> Run #1: ./tiotest -t 2 -f 100 -r 2000 -b 4096 -d /raid5 -T
> 
> Is that enough to go on? Thanks for the lead on tiobench.

Not sure what you're asking, can you elaborate?
Greg



RE: bonnie++ for RAID5 performance statistics

2000-06-08 Thread Gregory Leblanc

> -Original Message-
> From: Darren Evans [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 07, 2000 3:02 AM
> To: [EMAIL PROTECTED]
> Subject: bonnie++ for RAID5 performance statistics
> 
> I guess this kind of thing would be great to be detailed in the FAQ.

Did you try reading the archives for this list, or the benchmarking HOWTO?

> Anyone care to swap statistics so I know how valid these are.
> 
> This is with an Adaptec AIC-7895 Ultra SCSI host adapter.
> 
> Is this good, reasonable or bad timing?

Impossible to tell, since we only know the adapter.  How many disks, what
sort of configuration, what processor/ram?  Without those, you can't even
guess at how the performance compares.  You should also check out tiobench
if you're doing multi-disk things, since it does a pretty darn good job of
threading, which takes better advantage of RAID.  tiobench.sourceforge.net,
I think.
One other thing, I find it easier to read things if your mail program
doesn't wrap lines like that.  If you can't modify it, attachments are good
for me.  Later!
Greg



RE: how should I set up swap?

2000-06-01 Thread Gregory Leblanc

> -Original Message-
> From: Luca Berra [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, June 01, 2000 11:24 AM
> To: [EMAIL PROTECTED]
> Subject: Re: how should I set up swap?
> 
> On Thu, Jun 01, 2000 at 04:26:24PM +0100, Corin Hartland-Swann wrote:
> > The solution I used (for maximum resilience) was to put a 
> swap /file/ on
> > the RAID-1 root (instead of a seperate partition). This 
> gets around the
> > problem of using swap during reconstruction, IIRC because swap works
> > slightly differently on a file rather than on a device. 
> There is a small
> > speed penalty, but much better resiliancy.
> 
> NO, it does not, the problem is still there whether you swap 
> on file or on
> partition, sorry

Can somebody explain what the problem is, and why it's a problem, and all
that Jazz?  Or point us to an archive of the lists and give us a hint as to
what the subject line is, so that we can find it ourselves?
Greg



RE: RAID 1+0

2000-06-01 Thread Gregory Leblanc

> -Original Message-
> From: Neil Brown [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, June 01, 2000 4:29 AM
> To: Corin Hartland-Swann
> Cc: Theo Van Dinter; [EMAIL PROTECTED]
> Subject: Re: RAID 1+0
> 
> On Thursday June 1, [EMAIL PROTECTED] wrote:
> > 
> > Theo,
> > 
> > On Wed, 31 May 2000, Theo Van Dinter wrote:
> > > On Wed, May 31, 2000 at 09:10:30AM -0400, Andy Poling wrote:
> > > > That's the error you will get any time that you try to 
> layer raid levels
> > > > that md does not support layering.  It's a safety belt 
> mechanism of sorts.
> > > 
> > > Arguably, any combination should be allowed, but 0+1 and 
> 1+0 at minimum.
> > 
> > So, is 0+1 the only combination currently allowed?
> 
> Just to set the record straight, no layering of RAID arrays works with
> the 2.2patch set.
> 
> 0+1 (meaning a mirrored set of striped sets) appears to work, until a
> drive fails.
> On drive failure, the RAID system attempts to remove one of the
> underlying drives from the overlying mirrored set, fails to find it,
> and dies.
> An instance of this was reported on linux-raid a week ago or so.
> If you want to lookin an archive, the subject line was
>  "Disk failure->Error message indicates bug"

So just to confirm, RAID 0+1 is broken, although it appeared to work until
somebody uncovered a bug?  I didn't read that thread, or save it (drat!)

> > 
> > If so, is anybody working on allowing other combinations?
> > 
> > Is anybody else interested in seeing 1+0, 5+0, etc?
> 
> 2.4, when if comes out, should be able to support all combinations,
> both raid with raid and raid with lvm.  But it doesn't yet (I have a
> patch that I am working on).

Any idea how much hacking time still needs to go into that?  I was planning
to rebuild my system with some combination of RAID 1 and 0 fairly soon, I'd
like to know if I should shelf those plans...
Greg



RE: how should I set up swap?

2000-06-01 Thread Gregory Leblanc

> -Original Message-
> From: Gavin Clark [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 31, 2000 6:35 PM
> To: '[EMAIL PROTECTED]' 
> Subject: how should I set up swap?
> 
> Hi,
> 
> got raid working, got lilo all set up, now how should I set up swap in
> /etc/fstab?
> 
> I have 2 scsi drives set up as a raid1- this has / and /boot 
> and the rest of
> the system. 
> 
> I also have an IDE drive for storage and backup.
> 
> all 3 drives have a partition set up to be swap
> 
> Right now swap is on the IDE drive.
> 
> I guess I have a few choices about how to set this up.
> 1) leave swap as /dev/hda2
> 2) move swap to /dev/sda2 or /dev/sdb2
> 3) join sda2 and sdb2 as md2 and put swap on the raid1
> 4) some combination of the above
> 
> I've been reading the archive and I can't get a clear picture 
> as to which
> way to go.
> 
> Mostly I'm looking for ressiliancy against drive failure.

If your system never uses swap, then it's not a big deal, just configure two
swap devices with the same priority (I've got 256MB of ram on my desktop, I
only use swap when I've got 2+ vmware machines running).  If you want to
protect against failure of the disk(s) that swap is stored on, create a
RAID1 for swap, and use the script that was just recently posted in place of
swapon -a, because you can't have swap while a RAID set is reconstructing.
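
Equal priority looks something like this in /etc/fstab (device names are
only examples):

    /dev/sda2   swap   swap   defaults,pri=1   0 0
    /dev/hda2   swap   swap   defaults,pri=1   0 0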
Greg



RE: Installing another SCSI controller

2000-05-30 Thread Gregory Leblanc

> -Original Message-
> From: Dave Wreski [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, May 30, 2000 9:41 PM
> To: [EMAIL PROTECTED]
> Subject: Installing another SCSI controller
> 
> Hi all.  I've got raid0 working fine between three 9G scsi disks.  I'd
> like to add an aha152x to drive a CDRW.  However, support for it is
> compiled directly in the kernel, and it the driver gets 
> loaded before the
> aha294x driver controlling the hard disks.
> 
> I knew it would be too easy to simply adjust the raidtab to 
> reflect the
> changed device names.  How can I go about updating the raid0 
> support to
> reflect the changed device names?  Is there a way to fix the 
> hard disks at
> their current working position to avoid this problem?

Assuming that you're using persistent superblocks, you don't have to do
anything to make it work.  Why would adding a CD-RW drive change the drive
letters for the disks?  Shouldn't it show up as srX or scdX?  Assuming that
your CD-RW is relatively new, you may actually get better performance from
putting it on the same bus as those other SCSI disks, depending on what
you're doing.  The aha152x was a pretty crappy ISA controller.  Even my
Mylex VLB controller outperforms it by a fair margin.
Greg



RE: Any distro with automated raid setup?

2000-05-29 Thread Gregory Leblanc

Try RedHat 6.2 or 6.1.  I've installed both onto a computer with dual 2GB
SCSI drives in a RAID1.  I wouldn't use anything else for /boot (assuming
that's where you keep everything boot-related).  I never use a RAM disk for
essential things like SCSI drivers and RAID code, so that's not an issue for
me.  You could even make /boot into a 3disk RAID1, and everything else RAID
5, if you needed the space.
Greg

> -Original Message-
> From: Slip [mailto:[EMAIL PROTECTED]]
> Sent: Monday, May 29, 2000 1:48 PM
> To: [EMAIL PROTECTED]
> Subject: Any distro with automated raid setup?
> 
> 
> Hi there,
> I'm wondering if anyone has run into a distribution of 
> linux that has software raid-util's pre-packaged into it, or 
> available in a third party package. I'v been trying to setup 
> software raid with three 2.1G SCSI drives for quite a while 
> now and am simply looking for an easier sollution. Any 
> pointers/suggestions?
> 
> Thanks!
> -Jamie
> 



RE: HELP with autodetection on booting

2000-05-29 Thread Gregory Leblanc

I started seeing this when I blew away my RAID0 arrays and put RAID1 arrays
on my home machine.  I suspect that this is caused by RedHat putting
something in the initscripts to start the RAID arrays AND the RAID slices
being set to type fd (RAID autodetect), but I haven't been able to confirm
this.  And since I just totaled my RH install, it may be a couple of weeks
before I get back to look some more.  
Greg

> -Original Message-
> From: Jieming Wang [mailto:[EMAIL PROTECTED]]
> Sent: Monday, May 29, 2000 6:31 AM
> To: [EMAIL PROTECTED]
> Subject: HELP with autodetection on booting
> 
> 
> I am running Redhat 6.0 with kernel 2.2.5-22. I have 
> successfully created a 
> RAID1 disk and mount with no problem.  However, when I reboot 
> the machine, 
> it failed. Below is part of the message from running command dmesg:
> 
> autorun ...
> ... autorun DONE.
> VFS: Mounted root (ext2 filesystem).
> (scsi0)  
> found at PCI 11/0
> (scsi0) Wide Channel, SCSI ID=7, 32/255 SCBs
> (scsi0) Downloading sequencer code... 374 instructions downloaded
> scsi0 : Adaptec AHA274x/284x/294x (EISA/VLB/PCI-Fast SCSI) 
> 5.1.16/3.2.4
>
> scsi : 1 host.
> (scsi0:0:5:0) Synchronous at 80.0 Mbyte/sec, offset 15.
>   Vendor: IBM   Model: DDRS-34560D   Rev: DC1B
>   Type:   Direct-Access  ANSI SCSI revision: 02
> Detected scsi disk sda at scsi0, channel 0, id 5, lun 0
> (scsi0:0:6:0) Synchronous at 80.0 Mbyte/sec, offset 15.
>   Vendor: IBM   Model: DDRS-34560D   Rev: DC1B
>   Type:   Direct-Access  ANSI SCSI revision: 02
> Detected scsi disk sdb at scsi0, channel 0, id 6, lun 0
> SCSI device sda: hdwr sector= 512 bytes. Sectors= 8925000 
> [4357 MB] [4.4 
> GB]
>  sda: sda1 sda2 sda3 sda4 < sda5 sda6 >
> SCSI device sdb: hdwr sector= 512 bytes. Sectors= 8925000 
> [4357 MB] [4.4 
> GB]
>  sdb: sdb1 sdb2 sdb3 sdb4 < sdb5 sdb6 >
> autodetecting RAID arrays
> (read) sda1's sb offset: 1028032 [events: 000a]
> (read) sdb1's sb offset: 1028032 [events: 000a]
> autorun ...
> considering sdb1 ...
>   adding sdb1 ...
>   adding sda1 ...
> created md0
> bind
> bind
> running: 
> now!
> sdb1's event counter: 000a
> sda1's event counter: 000a
> kmod: failed to exec /sbin/modprobe -s -k md-personality-3, errno = 2
> do_md_run() returned -22
> unbind
> export_rdev(sdb1)
> unbind
> export_rdev(sda1)
> md0 stopped.
> ... autorun DONE.
> VFS: Mounted root (ext2 filesystem) readonly.
> change_root: old root has d_count=1
> 
> JW. 
> 



RE: Problems creating RAID-1 on Linux 2.2.15/Sparc64

2000-05-28 Thread Gregory Leblanc

Why didn't this get integrated generically into the 2.2.15 patch?  It's been
a known "feature" for a while.
Greg

> -Original Message-
> From: Ion Badulescu [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, May 28, 2000 3:18 PM
> To: Gustav
> Cc: [EMAIL PROTECTED]
> Subject: Re: Problems creating RAID-1 on Linux 2.2.15/Sparc64
> 
> In article 
>  you wrote:
> 
> > I am having trouble using Linux RAID on a Sun Ultra1 running
> > 2.2.15.
> 
> You need an additional patch, just plain vanilla 2.2.15 + 
> raid-0.90 won't
> do on a sparc. Red Hat have it in their 2.2.14-12 source rpm, but I'm
> attaching it here, for convenience.
> 
> Ion
> 
[snip]



RE: disk partiton type

2000-05-26 Thread Gregory Leblanc

> -Original Message-
> From: Jieming Wang [mailto:[EMAIL PROTECTED]]
> Sent: Friday, May 26, 2000 9:37 AM
> To: [EMAIL PROTECTED]
> Subject: disk partiton type
> 
> Hello there,
> 
> I have 2 SCSI disks with the following configuration (running 
> Redhat 6.0 with kernel 2.2.5-15):
> 
> /etc/raidtab:
> 
> raiddev /dev/md0
>     raid-level            1
>     nr-raid-disks         2
>     nr-spare-disks        0
>     chunk-size            4
> 
>     device                /dev/sda1
>     raid-disk             0
>     device                /dev/sdb1
>     raid-disk             1
> 
> When I run command mkraid /dev/md0, I receive the following errors:
> 
> disk 0: /dev/sda1, 1028128kB, raid superblock at 1028032kB
> /dev/sda1 appears to contain an ext2 filesystem -- use -f to override
> mkraid: abort.
> 
> I also tried -f option and it didn't work either.
> 
> So it seems that the disk partition type (ext2)  was wrong?
> 
> Any suggestions will be appreciated.

Try overwriting some of the data on /dev/sda1, maybe with something like 'dd
if=/dev/zero of=/dev/sda1' for a few seconds (30 should be more than
sufficient).  Doing that WILL destroy the data on /dev/sda1, assuming that's
what you want to do.  After you do that, mkraid shouldn't see the ext2
filesystem, and should blow away the data on those drives, letting you
create the RAID.  Check out the Software-RAID-HOWTO from
http://www.linuxdoc.org/
Greg



RE: 2.2.15 with raid-0.9

2000-05-24 Thread Gregory Leblanc

Yes, Ingo Molnar has a patch for 2.2.15 at http://www.redhat.com/~mingo/
Greg

> -Original Message-
> From: Sangohn Christian [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 24, 2000 5:14 AM
> To: [EMAIL PROTECTED]
> Subject: 2.2.15 with raid-0.9
> 
> 
> Do I have to patch the newer kernel 2.2.15 to use the the new RAID
> support?
> 



RE: Archives

2000-05-23 Thread Gregory Leblanc

Two places, one posted just a little while ago.

http://www.progressive-comp.com/Lists/
http://kernelnotes.org/lnxlists/linux-raid/ 
Greg


> -Original Message-
> From: Ron Brinker [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, May 23, 2000 4:45 AM
> To: [EMAIL PROTECTED]
> Subject: Archives
> 
> 
> Does anyone know if an archive of this mailing list exists, 
> and what the 
> URL is if it does?
> 
> Thanks,
> 
> Ron
> 



RE: ICP vortex vs. mylex

2000-05-18 Thread Gregory Leblanc

> -Original Message-
> From: Adrian Head [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, May 18, 2000 10:00 PM
> To: [EMAIL PROTECTED]
> Subject: RE: ICP vortex vs. mylex
> 
> 
> > -Original Message-
> > From:   Christian Robottom Reis [SMTP:[EMAIL PROTECTED]]
> > Sent:   Thursday, May 18, 2000 8:20 AM
> > To: Thomas King
> > Cc: [EMAIL PROTECTED]
> > Subject:Re: ICP vortex vs. mylex
> > 
>   [Adrian Head]  [SNIP]
> >  
> > If you want to know what slow means, I'll post some SW-Raid
> > readbalanced
> > RAID1 benchmarks *grin* Mika rules!
> > 
>   [Adrian Head]  I haven't heard of the readbalanced patches
> before - where can I find info about them and where can I get them to
> try?

http://www.icon.fi/~mak/  That's Mika's homepage, with pointers to Tiotest
and the RAID1 read balancing patch.  It's pretty slick.
Greg



RE: Is there a size limit in linear mode?

2000-05-18 Thread Gregory Leblanc

> -Original Message-
> From: Christopher "C.J." Keist [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, May 18, 2000 8:38 AM
> To: [EMAIL PROTECTED]
> Subject: Is there a size limit in linear mode?
> 
> Hello,
> Looking for some help/advice.  I'm trying to append three logical
> drives defined on a HP NetRAID 3si Raid controller.  The sizes of the
> three logical drives are :363Gb, 101Gb and 34Gb.  I have read 
> through the
> Software-RAID.HOWTO.txt doc and have setup the kernel and 
> raidtab file as
> specified in the doc (I'm running RH6.2) plus set the fd for the
> partition types. Anyway when I run the mkraid
> command this is what I get:
> 
> [root@megatera /etc]# mkraid /dev/md0
> handling MD device /dev/md0
> analyzing super-block
> disk 0: /dev/sdc1, 373414828kB, raid superblock at 373414720kB
> disk 1: /dev/sdd1, 106687633kB, raid superblock at 106687552kB
> disk 2: /dev/sde1, 35559846kB, raid superblock at 35559744kB
> /dev/md0: Invalid argument
> 
> Here is what the /proc/mdstat file says:
> 
> [root@megatera raidtools-0.90]# more /proc/mdstat
> Personalities : [linear] 
> read_ahead not set
> unused devices: <none>
> 
> Have I hit a size limit on what the raidtools/kernel can handle?

I doubt it, but you never know.  Does /dev/md0 exist?  
Greg



RE: help interpret tiobench.pl results?

2000-05-17 Thread Gregory Leblanc

> -Original Message-
> From: Edward Schernau [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 17, 2000 11:19 AM
> To: [EMAIL PROTECTED]
> Subject: help interpret tiobench.pl results?
> 
> I get:
> 
>          File  Block  Num  Seq Read    Rand Read   Seq Write   Rand Write
>  Dir     Size  Size   Thr  Rate (CPU%) Rate (CPU%) Rate (CPU%) Rate (CPU%)
> ------- ------ ------ ---  ----------- ----------- ----------- -----------
>  .        200   4096   1   21.57 12.0% 0.634 1.13% 19.67 24.0% 1.080 1.86%
>  .        200   4096   2   16.24 10.1% 0.646 0.84% 19.86 35.7% 1.128 2.67%
>  .        200   4096   4   15.35 9.90% 0.652 0.83% 19.69 36.8% 1.123 2.80%
>  .        200   4096   8   14.82 9.93% 0.671 0.82% 19.56 38.0% 1.126 2.92%
> 
> The machine only has 64MB of RAM, and it was in X, with Netscape
> running,
> so very little memory was free.  Seems ok, but the Rand tests seem
> pretty pitiful. 

I think that's the point.  Random tests aren't fast, because there's a lot
of seek, read, seek, read overhead.  Random reads can't be fast on disks
because of all of the movement.  Solid state devices, on the other hand...

> What does this tell me, exactly, esp. as the threads
> increase?  And why does Seq. Read drop off, but Seq. Write doesnt?

The sequential reads probably drop off with multiple threads (assuming that
this is a single disk) because it's having to seek between reads.  I'm not
sure about the writes.  Of course I can't say with 100% certainty, nor can
anybody else, although we can all make good educated guesses.  
Greg



RE: md0 won't let go... (dmesg dump...)

2000-05-17 Thread Gregory Leblanc

> -Original Message-
> From: Harry Zink [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 17, 2000 9:10 AM
> To: m. allan noah; James Manning
> Cc: [EMAIL PROTECTED]
> Subject: Re: md0 won't let go... (dmesg dump...)
> 
> on 5/17/00 8:30 AM, m. allan noah at [EMAIL PROTECTED] wrote:
> 
[snip]
> 
> While I appreciate the patch/diff provided by James Manning, 
> I am extremely
> weary of applying anything to a system that I don't fully understand -
> particularly if it is suffixed by "Who knows..." (shiver).
> 
> Now, I just need to make sure all devices are attached as 
> Master devices, on
> their own controller port, and then figure out what minor and 
> major to set
> them at... *ANY* help in allowing me to better understand how 
> that's done,
> or in actually doing this will be appreciated.
> 
> > you MUST make the device
> > files in /dev/ in order for the kernel to know what devices 
> you are trying to
> > access (ok- that is oversimplified to the point of being 
> almost incorrect)
> 
> Alright, maybe it's oversimplified, but I grok that part 
> (that the kernel
> needs the proper device files, and that I don't have the 
> device files, and
> thus need to create them.
>  
> > until you use mknod to create these device files, you will 
> NOT be able to open
> > the drives, or do anything with them with ANY tool in 
> linux. the only thing
> > that will be able to see them is the kernel at boot time. 
> hence- your problem.
> 
> Thanks, and thanks to James Manning as well for finally 
> tracking down what
> the core of this problem is.
> 
> Is there some utility that will quickly and easily create 
> /dev/ files and
> provides qualified questions to assist in properly creating 
> /dev/ files?

Yeah, it's the utility "MAKEDEV", which probably can't create those entries
without the patch that James provided 8^).  You may, if you're brave, want
to take a look at the MAKEDEV script, and see if you can find anything on
'/dev/hdl' or 'hdl'.  If there isn't anything there, then you'll have to add
something to the script that tells it how to create those devices.  If you
look in devices.txt again, you will see that hdl requires a major device
number of something like (I'm making this up, CHECK IT!) 57.  Minor numbers
will vary with the entry that you are creating.  Since this is WAY off topic
for RAID, you can email me privately if you need help creating the
appropraite entries for these devices.  I learned about that the fun way
when I built a machine with 28 CD-ROM drives (what does NT do when it gets
beyond 26 drives?).  
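
If devices.txt agrees, creating the entries by hand is just a couple of
mknod calls.  The numbers below are what I'd expect for hdl (block major
57, minors starting at 64), but CHECK THEM against devices.txt first:

    mknod /dev/hdl  b 57 64    # the whole disk
    mknod /dev/hdl1 b 57 65    # its first partition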
Greg



RE: How to test raid5 performance best ?

2000-05-15 Thread Gregory Leblanc

> -Original Message-
> From: octave klaba [mailto:[EMAIL PROTECTED]]
> Sent: Monday, May 15, 2000 7:25 AM
> To: Thomas Scholten
> Cc: Linux Raid Mailingliste
> Subject: Re: How to test raid5 performance best ?
> 
> > 1. Which tools should i use to test raid-performace ?
> tiotest.
> I lost the official url
> you can download it from http://ftp.ovh.net/tiotest-0.25.tar.gz

Try http://tiobench.sourceforge.net.  That's a pretty old version, there
have been a number of improvements.  

> > 2. is it possible to add disks to a raid5 after its been started ?
> good question ;)

I thought there was something to do this, but I'm not sure.  I'd think that
LVM would be better able to make this workable than plain filesystems on
disks.
Grego



RE: md0 won't let go...

2000-05-10 Thread Gregory Leblanc

> -Original Message-
> From: Harry Zink [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 10, 2000 4:20 PM
> To: Gregory Leblanc; [EMAIL PROTECTED]
> Subject: Re: md0 won't let go...
> 
> 
> on 5/10/00 3:59 PM, Gregory Leblanc at [EMAIL PROTECTED] wrote:
> 
> > ok, stupid questions.  Try 'umount /dev/md0' then 'raidstop 
> /dev/md0' and
> > then 'fdisk /dev/hdX'.
> 
> [root@gate /root]# umount /dev/md0
> [root@gate /root]# raidstop /dev/md0
> [root@gate /root]# fdisk /dev/hdl
> 
> Unable to open /dev/hdl
> 
> No difference.

FUDGE.  :-)

> 
> > If that doesn't work, from lilo do a 'linux single'
> > and then the above commands.
> 
> You mean, reboot and at the lilo prompt type in 'linux single'?

Yep, that's what I mean.  
Greg



RE: md0 won't let go...

2000-05-10 Thread Gregory Leblanc

> -Original Message-
> From: Harry Zink [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 10, 2000 3:46 PM
> To: James Manning
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: md0 won't let go...
> 
> on 5/10/00 3:32 PM, James Manning at [EMAIL PROTECTED] wrote:
> 
> > Are you claiming that /proc/mdstat has the md0 active both 
> before and
> > after running raidstop /dev/md0?  Just want to clarify.
> 
> [root@gate Backup]# raidstart -a
> /dev/md0: File exists
> 
> [root@gate Backup]# raidstop /dev/md0
> /dev/md0: Device or resource busy
> 
> (This is normal, the fs is shared by atalk. I disable atalk)
> 
> [root@gate Backup]# raidstop /dev/md0
> /dev/md0: Device or resource busy
> 
> (Now this is no longer normal. No services or anything else 
> is using the
> partition. I made sure no one is logged in to that partition. 
> Still, the
> same error.)

ok, stupid questions.  Try 'umount /dev/md0' then 'raidstop /dev/md0' and
then 'fdisk /dev/hdX'.  If that doesn't work, from lilo do a 'linux single'
and then the above commands. 
Grego



RE: md0 won't let go...

2000-05-10 Thread Gregory Leblanc

> -Original Message-
> From: Harry Zink [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 10, 2000 3:27 PM
> To: Gregory Leblanc; [EMAIL PROTECTED]
> Subject: Re: md0 won't let go...
> 
> on 5/10/00 3:00 PM, Gregory Leblanc at [EMAIL PROTECTED] wrote:
> 
> > D'oh!  Are you sure that you have the device entries correct?
> 
> Yes, I have checked that multiple times. In fact, just to be 
> sure I also
> inspected it using webmin, which has as a nice feature to ONLY show
> available drives in its fdisk screen.
> 
> All three drives show up, without available partitions, but 
> when I bring up
> the details screen, it says VERY clearly:
> 
> Part of RAID device /dev/md0
> 
> as the reason why the drive can't be used with fdisk.

Are you sure that the RAID devices are stopped when you're running this
command?  When all else fails, grab a boot floppy, and get a kernel without
RAID support compiled in, and run fdisk while running from the non-RAID
capable floppy.  Changing the partition type from fd to something else
should prevent those disks from getting included in any RAID autostart.
Greg



RE: md0 won't let go...

2000-05-10 Thread Gregory Leblanc

> -Original Message-
> From: Harry Zink [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 10, 2000 2:56 PM
> To: Gregory Leblanc; [EMAIL PROTECTED]
> Subject: Re: md0 won't let go...
> 
> 
> on 5/10/00 2:08 PM, Gregory Leblanc at [EMAIL PROTECTED] wrote:
> 
> > That is a correct error message, and has nothing to do with 
> RAID.  You can't
> > run fdisk on /dev/hdx1, you have to run fdisk on /dev/hdx
> 
> Sorry, my bad in transcribing the error message.
> 
> I tried to fdisk /dev/hdx
> 
> And the error I received was:
> 
> Unable to open /dev/hdl

D'oh!  Are you sure that you have the device entries correct?  The only
times that I've had this error message from fdisk, I've been trying to
partition a drive that wasn't really there (like /dev/hdq), or when I'd done
something like 'fdisk /dev/sda3', which also doesn't work.
Greg



RE: md0 won't let go...

2000-05-10 Thread Gregory Leblanc

> -Original Message-
> From: Harry Zink [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 10, 2000 2:00 PM
> To: [EMAIL PROTECTED]
> Subject: md0 won't let go...
> 
> Problem: I compiled RAID patches properly into the recent 
> 2.2.15 kernel. I
> have TWO IDE controllers (Promise & HPT366). 2 drives 
> attached to the HPT366
> (10.1 gb IBMs), 3 drives attached to the Promise (3 x IBM 25.1 gb).
> 
> I successfully created /dev/md0 with the 2 x 10.1 gb drives.
> 
> When attaching the 3x25 gb drives to the Promise controller, 
> and restarting
> the system, everytime, md wants to take possession of these 
> drives, thus
> making it impossible for me to use fdisk to actually format 
> them properly
> (they are unformatted).
> 
> When trying to use fdisk, I get 'unable to open /dev/hdx1'.

That is a correct error message, and has nothing to do with RAID.  You can't
run fdisk on /dev/hdx1, you have to run fdisk on /dev/hdx.  Run fdisk on
these drives, and change partition types from FD (Linux RAID), to whatever
type of partition you want them to be.  Basically, I think you were just
pointing fdisk to the wrong place.

> By checking the syslog, it claims that these new drives are part of
> /dev/md0.
> 
> My raidtab definition doesn't even mention these drives (hdj, 
> hdk, hdl), and
> ONLY has entries for the existing raided drives (hdg1, hdh1).
> 
> Regardless what I do, I can't 'disconnect' these drives, or 
> disable the RAID
> mechanism at all. Any help with this will be appreciated.
> 
> Thanks.

Good luck,  
Greg



RE: RAID and new kernels, FYI

2000-05-06 Thread Gregory Leblanc

RedHat has a lot of patches in their kernel already.  Take a look at the
kernel source RPMs.
Greg

> -Original Message-
> From: Edward Schernau [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, May 06, 2000 12:20 PM
> To: [EMAIL PROTECTED]
> Subject: RAID and new kernels, FYI
> 
> 
> The RAID patches will NOT patch cleanly (nor will much else)
> on a Redhat-supplied kernel.  Make sure you start with a
> fresh kernel.org tarball.
> 
> Ed
> 



RE: Software-RAID and new Kernel -> Patch?

2000-05-06 Thread Gregory Leblanc

First, check out the Software-RAID-HOWTO at http://www.LinuxDoc.org/  

> -Original Message-
> From: Andreas ~ [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, May 06, 2000 4:53 AM
> To: [EMAIL PROTECTED]
> Subject: Software-RAID and new Kernel -> Patch?
> 
> Sorry if this question has already been answered in the past - I'm
> relatively new to this list and to Linux...
> 
> I'm using software-RAID Level 0 with kernel 2.2.11 and the correspon-
> ding raidpatch (raid0145-19990824-2.2.11) and the newest raid-
> tools 0.90 (raidtools-19990824-0.90.tar).
> It seems that kernelpatches for newer kernels aren't available. Is
> this raidpatch already included in the newer kernels (I need RAID-0
> and the autodetection feature (partition ID 'fd')) or do I have to
> stay with 2.2.11?

The 2.2.11 patch will work with kernels up to 2.2.13.  There is a patch for
2.2.14 that applies cleanly, and can be found at
http://www.redhat.com/~mingo/  This patch will also apply to the 2.2.15
kernel, but I believe that it will have 1 reject that you'll have to correct
manually.  Note that I haven't actually tried that one, but the author of
the HOWTO has.
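
Applying it goes roughly like this; the path and file name below are only
placeholders for whatever you actually downloaded:

  cd /usr/src/linux
  patch -p1 --dry-run < /tmp/raid-2.2.14-B1   # trial run, watch for rejects
  patch -p1 < /tmp/raid-2.2.14-B1
  find . -name '*.rej'                        # anything here needs hand-fixing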
Greg



RE: IDE Controllers

2000-05-05 Thread Gregory Leblanc

> -Original Message-
> From: Andre Hedrick [mailto:[EMAIL PROTECTED]]
> Sent: Friday, May 05, 2000 7:59 PM
> To: Gary E. Miller
> Cc: Linux Kernel; Linux RAID
> Subject: Re: IDE Controllers
> 
> What you do not know is that there will be a drive in the future that
> will have a native SCSI overlay and front end.  This will 
> have a SCB->ATA
> converter/emulation.  This will require setup and booting as a SCSI
> device.  FUN, heh??

Bleah, why?  I haven't figured out why there are all those IDE-SCSI hiding
things yet.  The history of Linux seems to point towards the IDE support
being better than the SCSI, and yet the CD-R/W devices work through the SCSI
interface, and it looks like now the disks will too.  Obviously, I don't keep
up on all of the kernel developments (I've still got a full-time job to keep
track of), but I'm still interested.
Greg




RE: performance limitations of linux raid

2000-05-05 Thread Gregory Leblanc

> -Original Message-
> From: Michael Robinton [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, May 04, 2000 10:31 PM
> To: Christopher E. Brown
> Cc: Chris Mauritz; bug1; [EMAIL PROTECTED]
> Subject: Re: performance limitations of linux raid
> 
> On Thu, 4 May 2000, Christopher E. Brown wrote:
> 
> > On Wed, 3 May 2000, Michael Robinton wrote:
> > 
> > > The primary limitation is probably the rotational speed 
> of the disks and 
> > > how fast you can rip data off the drives. For instance, 
> the big IBM 
> > > drives (20 - 40 gigs) have a limitation of about 27mbs 
> for both the 7200 
> > > and 10k rpm models. The Drives to come will have to make 
> trade-offs 
> > > between density and speed, as the technology's in the 
> works have upper 
> > > constraints on one or the other. So... given enough 
> controllers (either 
> > > scsii on disk or ide individual) the limit will be related to the 
> > > bandwidth of the disk interface rather than the speed of 
> the processor 
> > > it's talking too.
> > 
> > Not entirely, there is a fair bit more CPU overhead running an
> > IDE bus than a proper SCSI one.
> 
> A "fair" bit on a 500mhz+ processor is really negligible.

Not if you've got 12 IDE channels, with 1 drive each in a couple of big RAID
arrays.  Even if all of those were mirrors (since that takes the least host
CPU RAID-wise), that would suck up a lot more host CPU processing power than
the 3 SCSI channels that you'd need to get 12 drives and avoid bus
saturation.  
Greg



RE: performance limitations of linux raid

2000-05-04 Thread Gregory Leblanc

> -Original Message-
> From: Carruth, Rusty [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, May 04, 2000 8:36 AM
> To: [EMAIL PROTECTED]
> Subject: RE: performance limitations of linux raid
> 
> > The primary limitation is probably the rotational speed of 
> the disks and 
> > how fast you can rip data off the drives. For instance, ...
> 
> Well, yeah, and so whatever happened to optical scsi?  I 
> heard that you
> could get 1 gbit/sec (or maybe gByte?) xfer, and you could go 
> 1000 meters -
> or is this not coming down the pike?

1 Gbit/sec works out to about 125 Mbytes/sec (1 Gbit divided by 8 bits per
byte).  Ultra160 SCSI, at 160 Mbytes/sec, is already faster than that.  

> (optical scsi - meaning using fiber instead of ribbon cable 
> to interconnect
> controller to drive)

Fiber Channel, or SCSI over Fiber Channel?  Sun has an FC array that allows
you to have 500 metres between the server and the array, at 100 Mbytes/sec
data transfer.  I don't know what they're going to do with FC, but it's been
outpaced for speed by SCSI.  
Greg



RE: Raid 0.90 patch against 2.2.15

2000-05-04 Thread Gregory Leblanc

I haven't looked at 2.2.15, but the patch for 2.2.14 is at
http://www.redhat.com/~mingo/
Greg

> -Original Message-
> From: A James Lewis [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, May 04, 2000 7:54 AM
> To: [EMAIL PROTECTED]
> Subject: Raid 0.90 patch against 2.2.15
> 
> 
> 
> Hi all,
> 
> I know it's a pretty tall order since most of the core 
> development work is
> against the 2.3.x kernel. BUT
> 
> Has anyone got a working patch against 2.2.15 or even 2.2.14?
> 
> 
> A. James Lewis ([EMAIL PROTECTED])
> - Linux is swift and powerful.  Beware its wrath...
> 



RE: Fastest / Most stable way to get >2GB files in 2.2?

2000-05-01 Thread Gregory Leblanc

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Monday, May 01, 2000 11:07 AM
> To: [EMAIL PROTECTED]
> Subject: Fastest / Most stable way to get >2GB files in 2.2?
> 
> I have a MySQL database, running on SW RAID-0 over ext2, that wants to
> grow beyond the 2GB file size limit of ext2.  I will be moving the
> database this week to a four-way SMP system with six drives, so I want
> to take this opportunity to move to a file system that will support
> larger file sizes.

ext2 itself does not have a 2GB filesize limit.  ext2 running on the x86
platform (and presumably other 32-bit word length platforms) has a 2GB
filesize limit, because the kernel there uses a signed 32-bit file offset,
and 2^31 bytes is 2GB.  Get an UltraSPARC or an Alpha; they've got insane
filesizes using ext2.  
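
If you want to watch the limit bite on an x86 box, something like this
should quit right around the 2048MB mark with "File too large"; the output
path is just an example:

  dd if=/dev/zero of=/tmp/bigfile bs=1024k count=2100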

> Does anyone here have experience with ReiserFS or ext3 under software
> RAID?  What is the simplest option?  Should I consider a development
> kernel?  What is the most robust option?  (I will be moving to RAID-5
> in the process, but I don't want to shift more responsibility onto
> RAID error correction than I have to.)

IMHO, the journaling filesystems have 6+ months of good solid
development left before they'll go on any of my machines.  I do have 1 ext3
machine that I play with, but I crash that a lot.  Not sure who is at fault
there.  :-)
Greg



RE: lilo: Sorry, don't know how to handle device 0x0905

2000-04-30 Thread Gregory Leblanc

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, April 30, 2000 4:57 PM
> To: Martin Munt
> Cc: [EMAIL PROTECTED]
> Subject: Re: lilo: Sorry, don't know how to handle device 0x0905
> 
[snip]
> > What next? I'm booting from floppy now :(
> 
> That's what I do! :')  Floppy isn't a horrible way to go.  Just make
> an extra boot floppy or two, then you'll be fairly safe.  I don't
> think I've ever had a floppy drive 'die', but I have had disks die... 

That's just evil.  :-)  Floppy drives are relatively hard to kill, out of 50
machines in the labs, I replace about 1 or 2 drives a semester.  I replace
about 1 HD every 3 semesters.  However, floppy diskettes are INCREDIBLY
fragile.  If you set it on top of your speakers, or near a cell phone, or
even near a monitor, you could end up with all your bits in a bind.  I
usually keep 2 boot floppies around, but only because I like to have
something there in case my boot CDs or drives die.  And those are only for
when my HD dies to the point that I can't boot from it, which is pretty
rare.  Stay away from booting from floppy except in an emergency; floppies are
NOT reliable.  
Greg



RE: performance limitations of linux raid

2000-04-25 Thread Gregory Leblanc

> -Original Message-
> From: Daniel Roesen [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 25, 2000 3:07 PM
> To: [EMAIL PROTECTED]
> Subject: Re: performance limitations of linux raid
> 
> 
> On Tue, Apr 25, 2000 at 10:28:46PM +0100, Paul Jakma wrote:
> > Clue: the Promise IDE RAID controller is NOT a hardware RAID
> > controller.
> > 
> > Promise IDE RAID == Software RAID where the software is written by
> > Promise and sitting on the ROM on the Promise card getting called by
> > the BIOS.
> 
> Clue: this is the way every RAID controller I know of works 
> these days.

Then you've never used a RAID card.  I've got a number of RAID cards here, 2
from Compaq, 1 from DPT, and another from HP (really AMI), and all of them
implement RAID functions like striping, double writes (mirroring), and
parity calculations for RAID4/5 in firmware, using an onboard CPU.  All the
controllers here are i960-based, but I've heard that the StrongARM procs are
much faster at parity calculations.  The only controllers I've used that do
RAID in software are the Adaptec AAA series boards.  The other one that I
know of is this Promise thing.
Greg



RE: stability of 0.90

2000-04-25 Thread Gregory Leblanc

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 25, 2000 10:24 AM
> To: [EMAIL PROTECTED]
> Subject: stability of 0.90
> 
> 
> I've been running raid1 (kernel 2.0, then 2.2) on a 
> fileserver for over a
> year now. I have suddenly seen the need to upgrade to 
> raid0.90 after having
> a powerfailure+UPS failure; I _need_ hot recovery (12GB takes 
> about 2hrs to
> recover with the current code!). How stable is 0.90? Under

I've had no trouble with it, running a stripe set (RAID 0) for about 4 months
now.  

> , the file is labeled "dangerous". 
> But I can't use
> the 2.2.11 code under kernel.org 'cause 2.2.11 has that nasty 
> little TCP
> memory leak bug

All the RAID code is "dangerous", even the old 0.40 stuff.  The 2.2.11 patch
works all the way up to 2.2.13; for 2.2.14 you need Ingo's patch from
http://www.redhat.com/~mingo/  RAIDtools-0.90 is the version you want.
Greg



RE: performance limitations of linux raid

2000-04-24 Thread Gregory Leblanc

> -Original Message-
> From: Scott M. Ransom [mailto:[EMAIL PROTECTED]]
> Sent: Monday, April 24, 2000 6:13 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; Gregory Leblanc; bug1
> Subject: RE: performance limitations of linux raid
> 
> 
> >> There's "specs" and then there's real life.  I have never 
> >> seen a hard drive
> >> that could do this.  I've got brand new IBM 7200rpm ATA66 
> >> drives and I can't
> >> seem to get them to do much better than 6-7mb/sec with 
> either Win98,
> >> Win2000, or Linux.  That's with Abit BH6, an Asus P3C2000, 
> >> and Supermicro
> >> PIIIDME boards.  And yes, I'm using an 80 conductor cable. 
>  I'm using
> >> Wintune on the windows platforms and bonnie on Linux to do 
> benchmarks.
> >
> > I don't believe the specs either, because they are for the 
> "ideal" case.
> 
> Believe it.  I was getting about 45MB/s writes and 14 MB/s reads using
> RAID0 with the 2.3.99pre kernels on a Dual PII 450 with two 30G
> DiamondMax (7200rpm Maxtor) ATA-66 drives connected to a 
> Promise Ultra66
> controller. 
> 
> Then I moved back to kernel 2.2.15-pre18 with the RAID and IDE patches
> and here are my results:
> 
>   RAID0 on Promise Card 2.2.15-pre18 (1200MB test)
> --
>  ---Sequential Output ---Sequential Input-- --Random--
>  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
>  K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
>   6833 99.2 42532 44.4 18397 42.2  7227 98.3 47754 33.0 182.8  1.5
> **
> 
> When doing _actual_ work (I/O bound reads on huge data sets), I often
> see sustained read performance as high as 50MB/s.
> 
> Tests on the individual drives show 28+ MB/s.

Sounds dang good, but I don't have any of those yet...  When I can get a
1350 MHz proc, I'll grab a new machine and correlate these results for
myself.  :-)

> 
> The performance is simply amazing -- even during real work (at least
> mine -- YMMV).  And best of all, the whole set-up (Promise card + 2X
> Maxtor drives only cost me $550)
> 
> I simply can't see how SCSI can compete with that.

Easy, SCSI still competes.  It's called redundancy and scalability.  It's
hard to get more than 4 (maybe 8 with a big system) IDE drives attached to
one box.  That same thing is trivial with SCSI, and you can even go with far
more than that.  Here's one example.  At the office, I've got a single
machine with 4 internal, hot-swap drives, and two external 5-disk chassis
that are both full, as well as a tape drive, a CD-ROM, and a CD-RW.  The
tape is about 3 feet away, and the drive chassis are more like 12 feet;
everything is well within spec for the SCSI on this machine.  With IDE, I
couldn't get that much space if I tried, and I wouldn't be likely to have
the kind of online redundancy that I have with this machine.  I'll admit
that this is the biggest machine that we have, but we're only taking care of
250 people, with about a dozen people outside of the Information Services
department who actually utilize the computing resources.  Any remotely
larger shop, or one with competent employees, could easily need servers that
scale well beyond this machine.  I don't think that SCSI has a really good
place on desktops, and its use is limited when GOOD IDE is available for a
workstation, but servers still have a demand for SCSI.  
Greg

P.S. My employer probably wouldn't take kindly to those words, so I'm
obviousally not representing them here.



RE: celeron vs k6-2

2000-04-24 Thread Gregory Leblanc

> -Original Message-
> From: Seth Vidal [mailto:[EMAIL PROTECTED]]
> Sent: Monday, April 24, 2000 2:39 PM
> To: [EMAIL PROTECTED]
> Subject: celeron vs k6-2
> 
> 
> Hi folks,
>  I did some tests comparing a k6-2 500 vs a celeron 400 - on a raid5
> system - found some interesting results
> 
> Raid5 write performance of the celeron is almost 50% better 
> than the k6-2.
> 
> Is this b/c of mmx (as james manning suggested) or b/c of the FPU?

NOT because of MMX, as the K6-2 has MMX instructions.  It could be because
of the parity calculations, but you'd need to do a test on a single disk to
make sure that it doesn't have anything to do with the CPU/memory chipset or
disk controller.  Can you try with a single drive to determine where things
should be?
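
For the single-drive baseline, something along these lines would do; /dev/hda
and the scratch directory are only examples, and the bonnie size should be
bigger than your RAM:

  hdparm -tT /dev/hda         # rough raw and buffered read numbers
  bonnie -d /mnt/test -s 256  # filesystem-level numbers on the bare disk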
Greg



RE: performance limitations of linux raid

2000-04-24 Thread Gregory Leblanc

> -Original Message-
> From: Chris Mauritz [mailto:[EMAIL PROTECTED]]
> Sent: Monday, April 24, 2000 2:30 PM
> To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: performance limitations of linux raid
> 
> 
> There's "specs" and then there's real life.  I have never 
> seen a hard drive
> that could do this.  I've got brand new IBM 7200rpm ATA66 
> drives and I can't
> seem to get them to do much better than 6-7mb/sec with either Win98,
> Win2000, or Linux.  That's with Abit BH6, an Asus P3C2000, 
> and Supermicro
> PIIIDME boards.  And yes, I'm using an 80 conductor cable.  I'm using
> Wintune on the windows platforms and bonnie on Linux to do benchmarks.

I don't believe the specs either, because they are for the "ideal" case.
However, I think that either your benchmark is flawed, or you've got a
crappy controller.  I have a (I think) 5400 RPM 4.5GB IBM SCA SCSI drive in
a machine at home, and I can easily read at 7MB/sec from it under Solaris.
Linux is slower, but that's because of the drivers for the SCSI controller.
I haven't done any benchmarks on my IDE drives because I already know that
they're SLOW.
Greg

> 
> Cheers,
> 
> Chris
> 
> - Original Message -
> From: "Michael" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Monday, April 24, 2000 5:10 PM
> Subject: Re: performance limitations of linux raid
> 
> 
> > > > > > I find those numbers rather hard to believe.  I've 
> not yet heard
> of a
> > > > > > disk (IDE or SCSI) that can reliably dump 22mb/sec 
> which is what
> your
> > > > > > 2 drive setup implies.  Something isn't right.
> >
> > Sure it is. go to the ibm site and look at the specs on all the new
> > high capacity drives. Without regard to the RPM, they are all spec'd
> > to rip data off the drive at around 27mb/sec continuous.
> > [EMAIL PROTECTED]
> >
> >
> 



RE: Status on drivers for ExtremeRaid 2000?

2000-04-22 Thread Gregory Leblanc

> -Original Message-
> From: Agus Budy Wuysang [mailto:[EMAIL PROTECTED]]
> Sent: Friday, April 21, 2000 12:11 AM
> To: Leonard N. Zubkoff
> Cc: [EMAIL PROTECTED]
> Subject: Re: Status on drivers for ExtremeRaid 2000?
> 
> "Leonard N. Zubkoff" wrote:
> > 
> >   From: "List User" <[EMAIL PROTECTED]>
> >   Date: Wed, 19 Apr 2000 20:03:01 -0500
> > 
> >   I was wondering if anyone here knew the status on the 
> drivers for the
> >   Mylex ExtremeRaid 2000 card.   Are they in beta yet?
> > 
> >   In the next 1-2 months I'm going to need a new raid card 
> and could really
> >   use the 2000 over the 1100.  (need the additional scsi channel).
> > 
> > Development of the driver support for the new controllers 
> is going well and I
> > expect to have a public beta within a couple of weeks.  My 
> to-do list is
> > getting quite short now.  I anticipate that the new driver 
> will be quite stable
> > even in beta form, and I plan to run my own server on the 
> 2000's as soon as I
> > can.
> 
> Just curious, is the 2000 any faster than the 1100 if I use
> only U2LVD drives?
> (knowing that they have the same StrongArm CPU @same freq)

CPU isn't the only issue for performance.  It could have a better SCSI
chipset, better drivers, or more advanced firmware.  Of course, I've never
used either of them, so I don't know how fast they are.

> Plus wouldn't the CPU be the bottleneck if say I have 2x 2ch 1100
> vs 1x 4ch 2000?

That depends on what the card is doing, and the firmware/drivers etc.
Assuming that they were equivalent on the drivers/firmware, then it would
come down to how many drives and the type of array they are configured in.
I don't know if you can create RAID arrays that span multiple adapters, so
perhaps that's a reason to want the 4-channel card vs. two 2-channel cards.
Greg



RE: debian software raid

2000-04-20 Thread Gregory Leblanc

Take a look at the Software RAID HOWTO, which can be found at
http://www.LinuxDoc.org/.  It has lots of great Software RAID information.
Greg

> -Original Message-
> From: fnijen [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, April 20, 2000 4:49 AM
> To: [EMAIL PROTECTED]
> Subject: debian software raid
> 
> 
> 
> Hi,
> 
> For my internship I have to install a root raid system using the debian
> operating system.  All the information and readme's I've found are pretty
> outdated, and while I'm not a total newbie on configuring and maintenance
> of debian systems, I'm a bit nervous about taking up this task.  It will
> be a production machine, so I want to make sure I'm doing this correctly.
> 
> First of all, does someone have a short practical guide on raid-5 and
> debian linux?
> Any pitfalls to look out for?
> And are the different readme's and howto's still up to date?  I saw that
> most documents are dated 97/98, and we are some years and kernels further
> on now...
> 
> The patch for the shutdown of a raid system in the root-raid mini howto I
> found gave errors while patching a 2.2.14 kernel.  This does not
> necessarily mean that the patch does not work, it could be me :) but I
> would love to hear a confirmation that all procedures in the howtos are
> still accurate!
> 
> Any help and hints appreciated!
> 
> Frank
> 



RE: Which raid version?

2000-04-18 Thread Gregory Leblanc

> -Original Message-
> From: Vinny [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 18, 2000 10:22 PM
> To: Linux Raid
> Subject: Which raid version?
> 
> Out of curiosity, 
> which raid version comes with RedHat 6.2?
> 0.90 or the new mingo stuff?

They are the same thing; the "new mingo stuff" is just Ingo's port (merge?
what's the right word?) of the 0.90 RAID code to 2.2.14.  RedHat 6.2 ships
with the new RAID code (0.90).
Greg



RE: adaptec 2940u2w hangups

2000-04-18 Thread Gregory Leblanc

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 18, 2000 10:56 AM
> To: [EMAIL PROTECTED]
> Subject: Re: adaptec 2940u2w hangups
> 
> 
> On Tue, 18 Apr 2000, Chris Mauritz wrote:
> 
> > Also, make sure you have active (not passive) termination.
> 
> Since I am no SCSI guru, can someone please explain how to determine
> which is active and which is passive?

It should say on the terminator.

> 
> I went to the computer store, asked for a SCSI terminator and 
> got it. It
> has three little LEDs (one is POWER, one is LVD and the third is SE).
> After I connected it to SCSI bus, POWER and LVD are on. Is 
> this active or
> passive?

This is an active terminator, because it has lights on it.  A passive
terminator will not have lights because it's nothing more than some
resistors in a fancy connector.
Greg



RE: adaptec 2940u2w hangups

2000-04-18 Thread Gregory Leblanc

> -Original Message-
> From: Jeff Hill [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 18, 2000 10:40 AM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: adaptec 2940u2w hangups
> 
> Your problems occur only with RAID and the 2940U2W? 
> 
> I was told it should be a SCSI problem, not a RAID problem. So, I quit
> trying to debug RAID issues and went on to SCSI 
> possibilities. Maybe the
> guru was wrong?
> 
> My latest attempt _may_ bear some fruit. I tried lowering the 
> speed from
> 80Mb to 40Mb through the controller bios (thanks to C Polisher for the
> suggestion). It's hard for me to catch the hangs, but so far this
> morning, it _appears_ to have stopped (I've thought I found 
> the problem
> before). Of course, it's a pretty rotten solution to have to 
> go to half
> speed. Better not to have any RAID.

WOAH!  If cutting the speed down fixes things, try getting some new, better
cables, and a new terminator (and a new backplane if you're using one).
That's almost certainly a SCSI hardware problem, and not something in the
SCSI drivers or the RAID code.  It may be that the problems "go away"
without RAID because the SCSI bus isn't getting the same type of use as it
would be with RAID.  
Greg

[snip]



RE: panic: B_FREE inserted into queues on kernel 2.2.14

2000-04-18 Thread Gregory Leblanc

> -Original Message-
> From: Carruth, Rusty [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 18, 2000 9:03 AM
> To: 'raid'
> Subject: panic: B_FREE inserted into queues on kernel 2.2.14
> 
> I hope that this is not my second post to this list, if so I 
> apologize!
> (I went and looked at the archive of this list at
> http://linuxwww.db.erau.edu/mail_archives/
> and it only went through Jan 2000, so not only do I not know if
> my posting got in, I cannot check to see what answers may have
> been posted.)
> 
> So, with apologies ahead of time, let me ask again:
> 
> I am running RedHat 6.2, kernel 2.2.14, with the raid patch
> 'raid0145-19990824-2.2.11'.
> (I rejected the query about undoing a patch that seemed to 
> have already been
> done).

IIRC, this patch really isn't happy on the 2.2.14 kernel.  Please try the
patch from http://www.redhat.com/~mingo/ and see if that fixes your
problems.
Greg



RE: PLEASE HELP: mkraid aborts!

2000-04-18 Thread Gregory Leblanc

You'll need to patch your kernel with the patches from
http://www.redhat.com/~mingo/, probably.  I don't think RAID 0.90 has been
integrated into 2.2.15pre.
Greg

> -Original Message-
> From: root [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 18, 2000 3:18 PM
> To: [EMAIL PROTECTED]
> Subject: PLEASE HELP: mkraid aborts!
> 
> 
> I am trying to make a software raid5, and it seems no matter what I do
> mkraid aborts.
> 
> My mkraid output looks like this:
> 
> handling MD device /dev/md0
> analyzing super-block
> disk 0: /dev/hde2, 5269320kB, raid superblock at 5269248kB
> disk 1: /dev/hdf2, 5269320kB, raid superblock at 5269248kB
> disk 2: /dev/hdg2, 5269320kB, raid superblock at 5269248kB
> disk 3: /dev/hdh2, 5269320kB, raid superblock at 5269248kB
> mkraid: aborted, see the syslog and /proc/mdstat for potential clues.
> 
> No clues are found in those places
> 
> My raidtab looks like this:
> 
> raiddev /dev/md0
>     raid-level              5
>     nr-raid-disks           4
>     nr-spare-disks          0
>     persistent-superblock   1
>     chunk-size              8
> 
>     parity-algorithm        left-symmetric
> 
>     device                  /dev/hde2
>     raid-disk               0
>     device                  /dev/hdf2
>     raid-disk               1
>     device                  /dev/hdg2
>     raid-disk               2
>     device                  /dev/hdh2
>     raid-disk               3
> 
> The kernel is 2.2.15pre17 with the  ide.2.2.15-17.2405.patch.
> Everything else is stock Red Hat 6.2 including mkraid v 0.90.0
> 
> My hardware setup is:
> 
> 600Mhz Pentium III on a ASUS P3C2000 Series Motherboard with a Tekram
> DC-390U2W  (sym53c8XX) and a 9GB IBM SCSI  as the system 
> drive. The raid
> is 4 WD 6.4GB  EIDE drives (WD64AA) hanging off a Promise Ultra66
> controller (PDC20262) .  I've also tried this with 4 and 8 Maxtor 40GB
> drives  (94098U8) and other PCI  DMA66 controllers (HPT366 & CMD648)
> with much the same results.
> 
> Any help or suggestions will be much appreciated.
> 
> Also the latest raidtools and patches I have been able to find at
> kernel.org are dated August 1999. Is this correct?
> 
> Thank you
> 
> Clay Claiborne
> 



RE: mkraid /dev/md0;; appears to have ext2 filesystem...

2000-04-17 Thread Gregory Leblanc

> -Original Message-
> From: Jason Lin [mailto:[EMAIL PROTECTED]]
> Sent: Monday, April 17, 2000 12:11 PM
> To: The coolest guy you know; [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: mkraid /dev/md0;; appears to have ext2 filesystem...
> 
> Hi everyone:
> I think I got a better understanding now.
> What I want to do is to make a raid1 device using a
> existing partition(of which the data I need to keep)
> and 2nd partition from 2nd hard disk.
> 
>  RedHat6.1 is installed and there are only two IDE
> hard disks(same capacity).
>  After all the e-mails back and forth, I believe there
> are two ways: (although they are pretty similar)
> 
> Assuming /dev/hda7 contains the needed data and
> /dev/hdc8 is 2nd partition.  We want to make a raid1
> device /dev/md0 on top of /dev/hda8 and /dev/hdc8.

Let me confirm, before I go too far.  You've got 2 hard drives, and want to
create a mirror.  You have some data that you want on keep on /dev/hda7.
The parts of the drive that you want to mirror are /dev/hda8 and /dev/hdc8.
Assuming that this is correct, just create your raidtab with /dev/hda8 and
/dev/hdc8 as RAID disks.  Then copy the data that you want mirrored from
/dev/hda7 to /dev/md0.  If this is not correct, say so, and post a message
saying exactly where the data you want to keep is, which partitions you want
to make into a RAID1 (mirror), and any other information I've forgotten to
ask for but you know we'll need.  :-)
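
Just as a sketch, the whole sequence would look something like this.  The
mount points below are made up, and mkraid will wipe whatever is on hda8 and
hdc8, so double-check the device names first:

/etc/raidtab:

raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    persistent-superblock   1
    chunk-size              4
    device                  /dev/hda8
    raid-disk               0
    device                  /dev/hdc8
    raid-disk               1

Then:

  mkraid /dev/md0                   # builds the mirror
  mke2fs /dev/md0                   # new filesystem on the array
  mount /dev/md0 /mnt/mirror
  cp -a /mnt/olddata/. /mnt/mirror  # copy over the data from hda7's mount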
Greg

P.S.  Jakob, do you have anything written up for creating a RAID from an
existing disk?  If not, let me know and I'll play around with my test system
and write something up for you.



RE: Fw: Adaptec AAA-133U2 Raid Controller support

2000-04-16 Thread Gregory Leblanc

> -Original Message-
> From: Edward Schernau [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, April 16, 2000 3:03 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Fw: Adaptec AAA-133U2 Raid Controller support
> 
> Gregory Leblanc wrote:
> 
> > To add a little more detail, Adaptec has decided that they 
> do not want this
> > product to work under Linux.  The AAA-13X series cards 
> perform ALL raid in
> > SOFTWARE, using custom drivers that Adaptec has written.  I 
> was under the
> 
> So it's like WinSCSI?  Or LoseSCSI?  ;-)

More like WinRAID.  :-)  It's not really all that different from the
Software RAID tools provided for Linux, or the ones by Veritas for Solaris,
or the NT4 software RAID tools, except that it's implemented through a
driver interface, rather than just built into the OS (that said, I don't
know how the Veritas/Solaris tools work; that may also be a driver).  It's a
concept that has its merits for proprietary OSs, but sort of falls apart
for "free" projects, like Linux and the Hurd.

> 
> > P.S.  It's not a "windows-only" product, it is also 
> supported under Novell
> > Netware 4 and 5.
> 
> Cool, wonder why Adaptec is being so lame about this, and why
> they'd market a software RAID package as their RAID solution.

Actually, Adaptec recently purchased DPT, a long-time hardware RAID
manufacturer.  I expect that they will begin to sell some of the DPT cards
under an Adaptec label.  As always, this is just IMHO, and doesn't
necessarily have any bearing on reality.
Greg



  1   2   >