Re: product testimonials

2000-03-21 Thread Seth Vidal

 Notice that it checks every 3 seconds, but emails every 10 minutes
 (prevents the inbox from filling up overnight).
 
 What does it look like when a drive dies?  I presume something like:
 
 [..UD]
 
 Then, perhaps just doing a (Perl) regexp: if (/\[[^\]]*D[^\]]*\]/)
 then report the failure?

what I've seen is that it looks like this:
[UU_UU] until the drive is marked as dead.

and then it changes to:
[UUDUU] (I believe)

I'd want to know about the _ until otherwise noted.

and I'd want to be able to touch an ignore file so that if the array is
rebuilding I don't hear about it every few minutes.
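
A minimal Perl sketch of that check might look something like this (the
status-bracket match and the ignore-file path are assumptions, not settled
choices; it would be run periodically, e.g. from cron, with the 10-minute
mailing logic layered on top):

#!/usr/bin/perl
# Minimal /proc/mdstat watcher: report any member marked _ (or D),
# but stay quiet while an ignore file exists (touched during a rebuild).
use strict;
use warnings;

my $ignore_file = '/etc/raidmon.ignore';   # touch this to silence reports

exit 0 if -e $ignore_file;

open my $fh, '<', '/proc/mdstat' or die "cannot read /proc/mdstat: $!";
while (my $line = <$fh>) {
    # Match a status bracket such as [UU_UU] or [UUDUU] that contains
    # at least one non-U member.
    if ($line =~ /\[[U_D]*[_D][U_D]*\]/) {
        print "RAID degraded: $line";
    }
}
close $fh;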

I'll see what I can hack up.

it wouldn't be a bad idea to put a few of these up on a raid-related
website so people could see their options.

-sv





failed disks

2000-03-21 Thread Seth Vidal

Hi,
 I'm doing a series of bonnie tests along with a fair amount of file
md5summing to determine the speed and reliability of a RAID-5 configuration.
I have 5 drives on a TekRam 390U2W adapter. Three of the drives are the same
Seagate Barracuda 9.1 GB drive. The other two are the 18 GB Barracudas.

Two of the nine-gig drives fail - consistently - when I run bonnie tests on
them. One will get flagged as bad in one run and die out. This one I can
confirm is bad because it fails on its own outside of the RAID array (it
fails to be detected by Linux at all - no partitions are found and it
can't be started). The other passes a badblocks -w test and appears to
work. However, it ALWAYS fails when it's part of the array and a bonnie
test is run.

Does this sound like a hardware fault? If so, why does it only occur when
RAID is used?

thanks
-sv





Re: failed disks

2000-03-21 Thread Jakob Østergaard

On Tue, 21 Mar 2000, Seth Vidal wrote:

 Hi,
  I'm doing a series of bonnie tests along with a fair amount of file
 md5summing to determine the speed and reliability of a RAID-5 configuration.
 I have 5 drives on a TekRam 390U2W adapter. Three of the drives are the same
 Seagate Barracuda 9.1 GB drive. The other two are the 18 GB Barracudas.
 
 Two of the nine-gig drives fail - consistently - when I run bonnie tests on
 them. One will get flagged as bad in one run and die out. This one I can
 confirm is bad because it fails on its own outside of the RAID array (it
 fails to be detected by Linux at all - no partitions are found and it
 can't be started). The other passes a badblocks -w test and appears to
 work. However, it ALWAYS fails when it's part of the array and a bonnie
 test is run.
 
 Does this sound like a hardware fault? If so, why does it only occur when
 RAID is used?

You can most likely trigger it too if you run non-RAID I/O on all the disks
simultaneously.

It sounds like you have a SCSI bus problem: bad cabling, termination, etc.
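
A minimal Perl sketch of that kind of simultaneous, non-RAID load (the
device names are examples only; it simply reads every disk flat out at the
same time, and any bus or drive errors will show up in the kernel log):

#!/usr/bin/perl
# Read every listed disk at full speed, all at once, outside of the RAID.
# Device names are examples -- substitute your own, and make sure nothing
# on them is mounted.  SCSI/bus errors will show up in dmesg / syslog.
use strict;
use warnings;

my @disks = qw(/dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde);
my @pids;

for my $disk (@disks) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: sequential read of the whole device.
        exec 'dd', "if=$disk", 'of=/dev/null', 'bs=1024k'
            or die "exec dd failed: $!";
    }
    push @pids, $pid;
}

waitpid($_, 0) for @pids;
print "All readers finished; check the kernel log for SCSI errors.\n";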

-- 

:   [EMAIL PROTECTED]     : And I see the elder races,         :
:.........................: putrid forms of man                :
:   Jakob Østergaard      : See him rise and claim the earth,  :
:        OZ9ABN           : his downfall is at hand.           :
:.........................:...........{Konkhra}................:



not finding extra partitions

2000-03-21 Thread Douglas Egan

I had my RAID-5 up and running with 3 15 GB IDE disks.  I shut down,
removed the cable from one drive, and rebooted for a test.

All seemed to go well; the system ran in degraded mode.  When I reconnected
the drive, only 1 of the 3 partitions on the drive is recognized.  2 of my
3 /dev/md- arrays still run in degraded mode.

How can I force the partition to be seen as "good" so the array will rebuild?
-- 
-----------------------------------------
+---------------------------------------+
| Douglas Egan              Wind River  |
| Sr. Staff Engineer                    |
| Tel   : 847-837-1530                  |
| Fax   : 847-949-1368                  |
| HTTP  : http://www.windriver.com      |
| Email : [EMAIL PROTECTED]             |
+---------------------------------------+



RAID 0.90+ status for 2.4 (pre) ??

2000-03-21 Thread Matti Aarnio

I myself really need working RAID(1) code, but current 2.3.99 kernels
don't even have compilable RAID1 code in them :-(

Any chance of the raid-code keepers getting their act together and publishing
something working?  Or must I simply reinstall my development machines
without RAID?

I myself can't add the missing LFS bits to the current kernel unless I
get something bootable (and usable) for my machines.

/Matti Aarnio [EMAIL PROTECTED]