Re: EIDE-UDMA33 to UW-SCSI bridge

2000-06-28 Thread Douglas Egan

I just found this URL:

http://www.acard.com.tw/Products/SCSI_IDE_Chip/ARC760.htm

Thomas Waldmann wrote:
> 
> > On Thu, 22 Jun 2000, Thomas Waldmann wrote:
> >
> > > The most awesome thing I have had yet is IDE AND SCSI:
> > >
> > > IDE disks (BIG, fast & cheap)
> > > IDE-to-SCSI-bridging adaptor (one per IDE disk)
> > > UW SCSI controller
> >
> > What controller did you use for this? Is it with the 3Ware card? (I've
> > heard that the hardware array of UDMA disks appears as one SCSI device)
> 
> It is a small PCB that attaches directly to the HDD's EIDE connector and
> has a UW SCSI connector on the back side. Each EIDE disk appears as if it
> were a UW SCSI disk - you can use a standard SCSI controller like Symbios or
> Adaptec.
> 
> The bridge has "AEC-7720UW" printed on the PCB, so this seems to be the
> original part name. The main chip is ARC-760.
> 
> The manufacturer is ACard, but I think they sell this stuff only to OEMs.
> 
> The bridges work well; I get block transfer rates >= 22MB/s with IBM DTLA
> HDDs.
> 
> The retail price of these bridges is about 110 EURO + tax here.
> 
> If somebody finds better prices somewhere else, please notify me.
> 
> I use 4 of these bridges + 4 IBM DTLA EIDE disks attached to an AIC-7895
> dual-UW-chip in my Linux Software RAID5 setup (just to get back on-topic ;-).
> 
> Thomas

-- 
+-------------------------------------+
| Douglas Egan            Wind River  |
|                                     |
| Tel   : 847-837-1530                |
| Fax   : 847-949-1368                |
| HTTP  : http://www.windriver.com    |
| Email : [EMAIL PROTECTED]           |
+-------------------------------------+



Re: 2.2.16 RAID patch

2000-06-14 Thread Douglas Egan

Pardon my confusion.

I want to move from 2.2.14 (with the raid5 patches and the IDE patches that
support the Promise Ultra66) to 2.2.16.

I have the raid patches, but I am confused about which IDE patch to
use.

Should I use:

ftp.kernel.org
/pub/linux/kernel/people/hedrick/ide-2.2.16/ide.2.2.16-pre7.2509.patch.gz
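
If that is the right patch, my rough plan (just a sketch - assuming it
applies with the same -p0 layout from /usr/src as the 2.2.14 patches did)
would be:

  cd /usr/src
  tar xzf linux-2.2.16.tar.gz
  gunzip ide.2.2.16-pre7.2509.patch.gz
  patch -p0 < ide.2.2.16-pre7.2509.patch
  patch -p0 < raid-2.2.16-A0        # the RAID patch mentioned below

Corrections welcome if the order or patch level is wrong.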

Thanks,

Doug Egan


Matthew DeFoor wrote:
> 
> > the latest 2.2 (production) RAID code against 2.2.16-final can be found
> > at:
> >
> > http://www.redhat.com/~mingo/raid-patches/raid-2.2.16-A0
> >
> > let me know if you have any problems with it.
> >
> 
> Should CONFIG_AUTODETECT_RAID be defined in md.c? I can manually start
> the raid and have it start the raid as another file system, but if I try
> to boot using root+raid I get the error "unable to read superblock". I
> used the fd partition type and put the alias md-personality-3 raid1 in
> conf.modules for good measure. Still no luck. Any insight would be
> greatly appreciated.
> 
> Thanks! - mattd



Re: redhat 6.2 and RAID patches

2000-05-25 Thread Douglas Egan

Patrick,

I am attaching a mail that I sent to Erich.  I followed his instructions
with a virgin 2.2.14 kernel and have had great success using the Promise
Ultra66 board in a 3-disk RAID 5 configuration.

As someone previously mentioned, the RH6.2 kernel is already heavily
patched, and applying the patches specified in the attachment to the RH
kernel will generate lots of errors.
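
(One quick way to confirm that without touching the tree is GNU patch's
dry-run mode, e.g.:

  cd /usr/src
  patch -p0 --dry-run < raid-2.2.14-B1

which reports the rejects it would produce without applying anything.)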

Regards,

Doug Egan
-- 
+-------------------------------------+
| Douglas Egan            Wind River  |
|                                     |
| Tel   : 847-837-1530                |
| Fax   : 847-949-1368                |
| HTTP  : http://www.windriver.com    |
| Email : [EMAIL PROTECTED]           |
+-------------------------------------+


My thanks to Erich for helping me out with this.  The explicit
directions below were all I needed.  I am now running RAID5 on the
patched 2.2.14 kernel, with the Promise Ultra66.  Erich deserves his "2
cents"!

I was worried about just swapping in the Ultra card for the EIDE Max
card, but to my pleasant surprise my raid5 started working with the new
card without a hiccup, and I now have IRQs 10 & 11 free again.

Having said that, I was surprised to see that my nfsd stopped working
with the new build.  I suspect it is a 2.2.14 issue and not RAID
related.  Again, someone on this list pointed me to
nfs.sourceforge.net:

http://download.sourceforge.net/nfs/kernel-nfs-dhiggen_merge-1.4.tar.gz

to complete the transition.
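
(For anyone hitting the same nfsd problem: a couple of generic sanity
checks - not part of the patch itself - that should confirm the server is
registered and exporting again after the rebuild are:

  rpcinfo -p localhost | grep nfs
  showmount -e localhost
)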

Thanks to everyone for their help!

Doug Egan

Erich wrote:
> 
> Ok, here are the notes that I wrote to myself of how to get Software
> RAID and the Promise Ultra/66 in the same kernel:
> 
> 1. Don't use the RedHat version of the 2.2.14 kernel.  It has
> too many patches, so the other patches won't work.
> 
> 2. Do unpack the linux-2.2.14.tar.gz file.
> 
> 3. Apply the ide.2.2.14.2124.patch file using this command:
> 
> cd /usr/src; patch -p0 < ide.2.2.14.2124.patch
> 
> 4. Apply the raid patch using this command:
> 
> cd /usr/src; patch -p0 < raid-2.2.14-B1
> 
> 5. Configure the kernel.
> 
> 6. Compile it and install it.
> 
> 7. Follow the instructions on the Software RAID How-To file about how
> to get RAID partitions to work.
> 
> To get these files:
> 
> The kernel source:
> 
> http://www.kernel.org/pub/linux/kernel/v2.2/linux-2.2.14.tar.gz
> 
> The ide patch:
> 
> 
> http://www.kernel.org/pub/linux/kernel/people/hedrick/old/ide.2.2.14.2124.patch.gz
> 
> The RAID patch:
> 
> http://people.redhat.com/mingo/raid-patches/raid-2.2.14-B1
> 
> --
> This message was my two cents worth.  Please deposit two cents into my
> e-gold account by following this link:
> http://rootworks.com/twocentsworth.cgi?102861
> 275A B627 1826 D627 ED35  B8DF 7DDE 4428 0F5C 4454
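
For step 5 ("Configure the kernel"), the options that I believe matter for
this combination are roughly the following (names as they appear in the
patched 2.2 trees - double-check against your own .config):

  # Software RAID (from the raid-2.2.14-B1 patch)
  CONFIG_BLK_DEV_MD=y
  CONFIG_MD_RAID5=y
  CONFIG_AUTODETECT_RAID=y

  # Promise Ultra/66 support (from the hedrick IDE patch)
  CONFIG_BLK_DEV_PDC202XX=y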

-- 




My benchmarks

2000-04-25 Thread Douglas Egan

I have an Intel P-III running at 450 MHz on an Intel SE440BX-2
motherboard, with a patched 2.2.14 kernel and raidtools 0.90.

The RAID-5 array consists of 3 "Maxtor 51536U3" drives.  One drive is the
master (no slave) on the secondary motherboard IDE port; the other 2 are
alone on the primary and secondary channels of a Promise Ultra ATA/66.
The RAID is configured as follows:

[root@porgy /proc]# cat mdstat
Personalities : [raid5] 
read_ahead 1024 sectors
md0 : active raid5 hdg1[1] hde1[0] hdc1[2] 10005120 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
md1 : active raid5 hdg5[2] hde5[1] hdc5[0] 10005120 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
md2 : active raid5 hdg6[2] hde6[1] hdc6[0] 10004096 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
unused devices: <none>
[root@porgy /proc]# 
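
For reference, the raidtab behind md0 (reconstructed from the mdstat output
above, so treat it as approximate rather than a copy of my actual file) looks
something like this; md1 and md2 follow the same pattern with the hd?5 and
hd?6 partitions:

  raiddev /dev/md0
      raid-level            5
      nr-raid-disks         3
      nr-spare-disks        0
      persistent-superblock 1
      chunk-size            64
      device                /dev/hde1
      raid-disk             0
      device                /dev/hdg1
      raid-disk             1
      device                /dev/hdc1
      raid-disk             2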

The following test was run on /dev/md2 mounted as /usr1

[degan@porgy tiobench-0.29]$ ./tiobench.pl --block 4096
No size specified, using 510 MB
Size is MB, BlkSz is Bytes, Read, Write, and Seeks are MB/sec

        File   Block  Num  Seq Read    Rand Read   Seq Write   Rand Write
  Dir   Size   Size   Thr  Rate (CPU%) Rate (CPU%) Rate (CPU%) Rate (CPU%)
------- ------ ------ ---  ----------- ----------- ----------- -----------
   .      510   4096    1  21.63 10.4% 0.757 0.87% 19.62 17.1% 0.799 2.45%
   .      510   4096    2  22.89 12.0% 0.938 0.90% 20.18 17.7% 0.792 2.38%
   .      510   4096    4  21.85 12.5% 1.113 1.09% 20.37 18.1% 0.784 2.19%
   .      510   4096    8  20.57 13.1% 1.252 1.34% 20.54 18.5% 0.776 2.34%


I am not sure what to make of the results, but I am happy with how my RAID
is performing.  I post them only as an FYI.


+-------------------------------------+
| Douglas Egan            Wind River  |
|                                     |
| Tel   : 847-837-1530                |
| Fax   : 847-949-1368                |
| HTTP  : http://www.windriver.com    |
| Email : [EMAIL PROTECTED]           |
+-------------------------------------+



Re: Help with RAID 1 in 2.2.14 (RedHat 6.2)

2000-04-18 Thread Douglas Egan

My thanks to Erich for helping me out with this.  The explicit
directions below were all I needed.  I am now running RAID5 on the
patched 2.2.14 kernel, with the Promise Ultra66.  Erich deserves his "2
cents"!

I was worried about just swapping in the Ultra card for the EIDE Max
card, but to my pleasant surprise my raid5 started working with the new
card without a hiccup, and I now have IRQs 10 & 11 free again.

Having said that, I was surprised to see that my nfsd stopped working
with the new build.  I suspect it is a 2.2.14 issue and not RAID
related.  Again, someone on this list pointed me to
nfs.sourceforge.net:

http://download.sourceforge.net/nfs/kernel-nfs-dhiggen_merge-1.4.tar.gz

to complete the transition.

Thanks to everyone for their help!

Doug Egan

Erich wrote:
> 
> Ok, here are the notes that I wrote to myself of how to get Software
> RAID and the Promise Ultra/66 in the same kernel:
> 
> 1. Don't use the RedHat version of the 2.2.14 kernel.  It has
> too many patches, so the other patches won't work.
> 
> 2. Do unpack the linux-2.2.14.tar.gz file.
> 
> 3. Apply the ide.2.2.14.2124.patch file using this command:
> 
> cd /usr/src; patch -p0 < ide.2.2.14.2124.patch
> 
> 4. Apply the raid patch using this command:
> 
> cd /usr/src; patch -p0 < raid-2.2.14-B1
> 
> 5. Configure the kernel.
> 
> 6. Compile it and install it.
> 
> 7. Follow the instructions on the Software RAID How-To file about how
> to get RAID partitions to work.
> 
> To get these files:
> 
> The kernel source:
> 
> http://www.kernel.org/pub/linux/kernel/v2.2/linux-2.2.14.tar.gz
> 
> The ide patch:
> 
> 
> http://www.kernel.org/pub/linux/kernel/people/hedrick/old/ide.2.2.14.2124.patch.gz
> 
> The RAID patch:
> 
> http://people.redhat.com/mingo/raid-patches/raid-2.2.14-B1
> 
> --
> This message was my two cents worth.  Please deposit two cents into my
> e-gold account by following this link:
> http://rootworks.com/twocentsworth.cgi?102861
> 275A B627 1826 D627 ED35  B8DF 7DDE 4428 0F5C 4454

-- 



Re: Help with RAID 1 in 2.2.14 (RedHat 6.2)

2000-04-14 Thread Douglas Egan

Erich,

I am planning to try the Promise Ultra66 tonight (I want to beef
up performance).  I currently have RAID5 running quite nicely with a Promise
EIDE-MaxII card.  I know about the 2.2.14-B1 patch for
RAID, but which Promise patch are you referring to?  I see that Promise
has a beta driver on their website, and this list pointed to:

http://www.csie.ntu.edu.tw/~b6506063/hpt366/

which references a patch by Andre Hedrick for Ultra66 support - I assume it
is the same one you reference below.  Have you or anyone else looked at
the beta driver on the Promise site?
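
(For what it's worth, once the patched kernel is in, I assume the quickest
sanity check that the Ultra66 is actually being driven is to look for its
interfaces in the boot messages and interrupt table, e.g.:

  dmesg | grep -i pdc
  cat /proc/interrupts

where the extra ide2/ide3 interfaces from the card should show up.)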

Doug Egan

Erich wrote:
> 
> I'm the one who originally posted the question, and now I may have an
> answer (with help from the list):
> 
> > I've tried doing that - 2.2.14 with both patches added.
> > Mostly they failed everywhere. Then once I did get it in and compiled,
> > it still never worked right, so I changed to a 2.3 kernel; even though alpha/beta
> > kernels are buggy, at least it's working for now. Hopefully in the next couple of
> > weeks/months 2.4 or 2.2.x will have the patches in it. Others on this list do
> > have it all working fine though (probably it's just me).
> >
> > Linux temp 2.3.48 #1 SMP Wed Mar 29 04:44:43 EST 2000 i686 unknown
> >   2:07pm  up 16 days,  8:13,  4 users,  load average: 0.01, 0.01, 0.00
> >
> > /dev/md0 77497248 61287908  12208368
> > /dev/md1 38620464 13837200  22789536
> > /dev/md2 77497248 54034202  21363046
> 
> It's definitely possible to use 2.2.14 with the Software RAID patch
> and with the Promise Ultra/66 patch at the same time.  I'm doing it
> right now.  Download the plain-vanilla 2.2.14 kernel.  Apply this
> patch first:
> 
> 
> http://www.kernel.org/pub/linux/kernel/people/hedrick/old/ide.2.2.14.2124.patch.gz
> 
> Then apply the RAID patch that <[EMAIL PROTECTED]> posted here
> earlier, called raid-2.2.14-B1 (I don't have a URL but I can mail you
> what was posted).
> 
> Then configure your kernel properly (email me if you have questions),
> compile it, install it, and it does miraculously work.  Mine detects
> the RAID partition on boot, and mounts it on boot, without having to
> modify init scripts at all.  I'm very impressed.
> 
> I promise you that 2.2.14 + Promise Ultra/66 + RAID works (in that
> order)!
> 
> The Promise Ultra/66 card has been a total nightmare, but now that
> it's working, it's the way to build big fast RAID arrays as cheaply as
> possible.  I think it's about half the cost of doing a comparable
> solution with SCSI.  This savings will be worth it if you're building
> a cluster of these machines.
> 
> It's just unfortunate that Promise isn't aggressively supporting
> Linux, and didn't get an Ultra/66 driver into the RedHat 6.2 release.
> 
> e
> 
> --
> This message was my two cents worth.  Please deposit two cents into my
> e-gold account by following this link:
> http://rootworks.com/twocentsworth.cgi?102861
> 275A B627 1826 D627 ED35  B8DF 7DDE 4428 0F5C 4454
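
One small note on the "mounts it on boot" part: as I understand it, the RAID
autodetection only assembles /dev/md0; the mount itself still comes from an
ordinary fstab entry, something along the lines of (the mount point and
filesystem type here are just examples):

  /dev/md0   /usr1   ext2   defaults   1 2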

-- 



Re: RAID5 array not coming up after "repaired" disk

2000-03-24 Thread Douglas Egan

When this happened to me I had to "raidhotadd" the disk to get it back into
the array.  What does your /proc/mdstat indicate?

Try:
raidhotadd /dev/md0 /dev/sde7
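
After the raidhotadd you can watch the rebuild with:

  cat /proc/mdstat

which should show a recovery progress line until the disk is fully back in
the array.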

Doug Egan

Marc Haber wrote:
> 
> Hi!
> 
> I am using kernel 2.2.14 with the RedHat Patch (raid-2.2.14-B1) on a Debian
> System, raid tools 0.90.990824-5. I have six SCSI disks on two host
> adapters, raidtab as follows:
> 
> |raiddev /dev/md0
> |    raid-level            5
> |    nr-raid-disks         6
> |    nr-spare-disks        0
> |    persistent-superblock 1
> |    chunk-size            32
> |
> |    device                /dev/sda7
> |    raid-disk             0
> |    device                /dev/sdb7
> |    raid-disk             1
> |    device                /dev/sdc7
> |    raid-disk             2
> |    device                /dev/sdd7
> |    raid-disk             3
> |    device                /dev/sde7
> |    raid-disk             4
> |    device                /dev/sdf7
> |    raid-disk             5
> 
> Performance of that array is quite impressive.
> 
> However, I wanted to test RAID behavior in the case of a disk failure. So I
> disconnected sde while the array was running. The system tried to access the
> failed disk for a few seconds and then continued in degraded mode.
> 
> I then rebooted, to find the array (now running on sda7 thru sde7 because
> sdf had moved up to sde) still in degraded mode. So far, so good.
> 
> Next step was reconnecting the "failed" disk and rebooting again. This time,
> the array didn't come up, and raidstart /dev/md0 gave the following output:
> 
> |(read) sda7's sb offset: 8666944 [events: 0029]
> |(read) sdb7's sb offset: 803136 [events: 0029]
> |(read) sdc7's sb offset: 803136 [events: 0029]
> |(read) sdd7's sb offset: 803136 [events: 0029]
> |(read) sde7's sb offset: 803136 [events: 0023]
> |autorun ...
> |considering sde7 ...
> |adding sde7 ...
> |adding sdd7 ...
> |adding sdc7 ...
> |adding sdb7 ...
> |adding sda7 ...
> |created md0
> |bind
> |bind
> |bind
> |bind
> |bind
> |running: 
> |now!
> |sde7's event counter: 0023
> |sdd7's event counter: 0029
> |sdc7's event counter: 0029
> |sdb7's event counter: 0029
> |sda7's event counter: 0029
> |md: superblock update time inconsistency -- using the most recent one
> |freshest: sdd7
> |md: kicking non-fresh sde7 from array!
> |unbind
> |export_rdev(sde7)
> |md0: former device sde7 is unavailable, removing from array!
> |md0: max total readahead window set to 640k
> |md0: 5 data-disks, max readahead per data-disk: 128k
> |raid5: device sdd7 operational as raid disk 3
> |raid5: device sdc7 operational as raid disk 2
> |raid5: device sdb7 operational as raid disk 1
> |raid5: device sda7 operational as raid disk 0
> |raid5: not enough operational devices for md0 (2/6 failed)
> |RAID5 conf printout:
> |--- rd:6 wd:4 fd:2
> |disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sda7
> |disk 1, s:0, o:1, n:1 rd:1 us:1 dev:sdb7
> |disk 2, s:0, o:1, n:2 rd:2 us:1 dev:sdc7
> |disk 3, s:0, o:1, n:3 rd:3 us:1 dev:sdd7
> |disk 4, s:0, o:0, n:4 rd:4 us:1 dev:[dev 00:00]
> |disk 5, s:0, o:0, n:5 rd:5 us:1 dev:[dev 00:00]
> |disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
> |disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
> |disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
> |disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
> |disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
> |disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
> |raid5: failed to run raid set md0
> |pers->run() failed ...
> |do_md_run() returned -22
> |unbind
> |export_rdev(sdd7)
> |unbind
> |export_rdev(sdc7)
> |unbind
> |export_rdev(sdb7)
> |unbind
> |export_rdev(sda7)
> |md0 stopped.
> |... autorun DONE.
> 
> It looks like the RAID drivers didn't like the sixth disk moving up to sdf
> again after the fifth disk was revived. This probably resulted in the driver
> thinking that two of six disks were gone.
> 
> After turning off the fifth disk again and a reboot, the array was back
> again with all data, but - of course - still in degraded mode:
> 
> |(read) sda7's sb offset: 8666944 [events: 002d]
> |(read) sdb7's sb offset: 803136 [events: 002d]
> |(read) sdc7's sb offset: 803136 [events: 002d]
> |(read) sdd7's sb offset: 803136 [events: 002d]
> |(read) sde7's sb offset: 803136 [events: 002d]
> |autorun ...
> |considering sde7 ...
> |adding sde7 ...
> |adding sdd7 ...
> |adding sdc7 ...
> |adding sdb7 ...
> |adding sda7 ...
> |created md0
> |bind
> |bind
> |bind
> |bind
> |bind
> |running: 
> |now!
> |sde7's event counter: 002d
> |sdd7's event counter: 002d
> |sdc7's event counter: 002d
> |sdb7's event counter: 002d
> |sda7's event counter: 002d
> |md0: max total readahead window set to 640k
> |md0: 5 data-disks, max readahead per data-disk: 128k
> |raid5: device sde7 operational as raid disk 5
> |raid5: device sdd7 operational as raid disk 3
> |raid5: device sdc7 operational as raid disk 2
> |raid5: device sdb7 operational as raid disk 1
> |raid5: 

Re: not finding extra partitions

2000-03-22 Thread Douglas Egan

Thanks Seth!

I dug through the linux-raid archives last night and found the answer
too, and got everything resynced.

I am using RH 6.1 (kernel 2.2.12-20) with a Promise EIDE-MaxII and 3 Maxtor
51536U3 IDE drives; 2 of the drives are on the Promise card and the 3rd is
on the secondary channel of the motherboard.  All seems to be working well.
Total cost: 150*3 + 26 = $476 for 30 GB of RAID5.

/dev/md0 -> /dev/hdc1 /dev/hde1 /dev/hdg1   mounted as /usr
/dev/md1 -> /dev/hdc5 /dev/hde5 /dev/hdg5   mounted as /home
/dev/md2 -> /dev/hdc6 /dev/hde6 /dev/hdg6   mounted as /usr1

I have a SOHO setup and am using the RAID5 box as a "network appliance".

/usr1/xtools  exported as nfs and SAMBA - contains Solaris and win32
  cross development tools.
/usr1/rtosdev exported as nfs and SAMBA - contains development code
/home exported as nfs and SAMBA - misc and NIS home directory
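
In case the config side is useful to anyone, the corresponding entries are
along these lines (the hostnames and share name here are placeholders, not
my real ones):

  /etc/exports:
    /usr1/xtools   *.example.com(rw)
    /usr1/rtosdev  *.example.com(rw)
    /home          *.example.com(rw)

  smb.conf:
    [xtools]
        path = /usr1/xtools
        read only = no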

Has anyone seen any problems with long-term use of Linux RAID5 in this or
any other environment?

Regards,

Doug Egan

Seth Vidal wrote:
> 
> > removed the cable from one drive and rebooted for a test.
> >
> > All seemed to go well; the system ran in degraded mode.  When I reconnected
> > the drive, only 1 of the 3 partitions on the drive was recognized.  2 of my
> > 3 /dev/md arrays still run in degraded mode.
> >
> > How can I force a "good" partition so the array will rebuild?
> 
> raidhotadd /dev/mdX /dev/sd??
> 
> then it reconstructs.
> 
> -sv

-- 
+-----------------------------------------+
| Douglas Egan   Wind River Systems, Inc. |
+-----------------------------------------+



not finding extra partitions

2000-03-21 Thread Douglas Egan

I had my raid5 up and running with 3 15GB IDE disks.  I shut down,
removed the cable from one drive, and rebooted for a test.

All seemed to go well; the system ran in degraded mode.  When I reconnected
the drive, only 1 of the 3 partitions on it was recognized, and 2 of my
3 /dev/md arrays still run in degraded mode.

How can I force the partition to be marked "good" so the arrays will rebuild?
-- 
+-------------------------------------+
| Douglas Egan            Wind River  |
| Sr. Staff Engineer                  |
| Tel   : 847-837-1530                |
| Fax   : 847-949-1368                |
| HTTP  : http://www.windriver.com    |
| Email : [EMAIL PROTECTED]           |
+-------------------------------------+