raid tools problem

2000-04-25 Thread Matthew Leung

Hi,

I upgraded my kernel from 2.0.32 to 2.2.14 and am using raidtools-0.90.  I
don't know why I can't initialize the raid disk successfully.  Could you
please let me know what the problem is?  Also, are there any documents on
troubleshooting the raidtools?  Thanks

[root@web /etc]# /sbin/mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sda5, 16257748kB, raid superblock at 16257664kB
/dev/sda5 is mounted
mkraid: aborted, see the syslog and /proc/mdstat for potential clues.
==
This is the raidtab file:

raiddev /dev/md0
raid-level  1
nr-raid-disks   2
nr-spare-disks  0
chunk-size  4

device  /dev/sda5
raid-disk   0

device  /dev/sdb5
raid-disk   1

raiddev /dev/md1
raid-level  1
nr-raid-disks   2
nr-spare-disks  0
chunk-size  4

device  /dev/sda6
raid-disk   0

device  /dev/sdb6
raid-disk   1


My partition info:

[root@web /etc]# df
Filesystem 1024-blocks    Used Available Capacity Mounted on
/dev/sda1       497667  275056    196909      58% /
/dev/sda5     15618577   39193  14766497       0% /home
/dev/sda6       894017    4904    842927       1% /var
/dev/sdb1       497667  195936    276029      42% /backup
/dev/sdb6       894017    4253    843578       1% /var2
/dev/sdb5     15618577    1907  14803783       0% /home2






Re: celeron vs k6-2

2000-04-25 Thread Kristian Soerensen

Hi

It's most likely due to the current Celerons having better memory bandwidth
than the K6-2's.

The more data per time unit that can get through the memory system, the
more time the CPU can spend doing calculations instead of sitting idle
waiting for data.

This is one good reason to use an Athlon with a good motherboard for
serious software RAID machines.

---   http://www.elof.dk   --
Kristian Elof Soerensen  [EMAIL PROTECTED]   (+45) 45 93 92 02 

On Mon, 24 Apr 2000, Seth Vidal wrote:

 Hi folks,
  I did some tests comparing a k6-2 500 vs a celeron 400 - on a raid5
 system - found some interesting results
 
 Raid5 write performance of the celeron is almost 50% better than the k6-2.
 
 Is this b/c of mmx (as james manning suggested) or b/c of the FPU?
 
 I used tiobench with sizes of more than 3X my memory size on both systems -
 memory and drives of both systems were identical.
 
 
 Thanks
 
 -sv
 
 
 
 
 




Re: raid tools problem

2000-04-25 Thread Jason Lin

Make sure /dev/sda5 and /dev/sdb5 are not mounted when

"/sbin/mkraid /dev/md0" is issued.

--- Matthew Leung [EMAIL PROTECTED] wrote:
 [root@web /etc]# /sbin/mkraid /dev/md0
 handling MD device /dev/md0
 analyzing super-block
 disk 0: /dev/sda5, 16257748kB, raid superblock at 16257664kB
 /dev/sda5 is mounted

Unmount /dev/sda5.

 mkraid: aborted, see the syslog and /proc/mdstat for
 potential clues.



Re: raid tools problem

2000-04-25 Thread Daniel Rus

I have the same problem with the same /etc/raidtab file.
My kernel version is 2.2.13.
The RAID1 array partitions are unmounted, but mkraid still fails.
I'm looking for a solution.
If I change the raid level to 0, there is no problem.
But I need level 1.

If I find a solution I will tell you.
I'm working on it.

Greetings from Daniel in La Aldea.

On Tue, 25 Apr 2000, Jason Lin wrote:
 Make sure /dev/sda5 and /dev/sdb5 are not mounted when
 
 "/sbin/mkraid /dev/md0" is issued.
 



Failed disk - how will it reboot?

2000-04-25 Thread Jochen Scharrlach

Hi,

on a server one disk had a "medium error" and the RAID1 (2.2.14-B1)
disabled one of the mirrors. It looks like this:

md0 : active raid1 sdb5[1] sda5[0](F) 4739072 blocks [2/1] [_U]

If I reboot now, how will the system react? Will it recognize the
failed partition or (worst case) will it try to overwrite the data on
sdb5 with sda5?

BTW, what's the easiest way to replace the failed disk? Will something
like "dd if=/dev/sda of=/dev/sdb count=1" and "raidhotadd /dev/md0 /dev/sdb5"
work? (assuming the old sdb becomes sda after the replacement)

Thanks,
Jochen

-- 

# mgm ComputerSysteme und -Service GmbH
# Sophienstr. 26 / 70178 Stuttgart / Germany / Voice: +49.711.96683-5


The Internet treats censorship as a malfunction and routes around it. 
   --John Perry Barlow 



Re: Failed disk - how will it reboot?

2000-04-25 Thread Piete Brooks

 on a server one disk had a "medium error" and the RAID1 (2.2.14-B1)
 disabled one of the mirrors. It looks like this:
 md0 : active raid1 sdb5[1] sda5[0](F) 4739072 blocks [2/1] [_U]

We got a JBOD with 24 disks, and about a quarter of them have failed in
just such a way.

 If I reboot now, how will the system react?

It will start in degraded mode using sdb5 only (assuming you have
persistent superblocks - PSBs !!)

 Will it recognize the failed partition

So long as the good partition is accessible and you are using PSBs,
it will read the configuration info and find that it is just sdb5.

 or (worst case) will it try to overwrite the data on sdb5 with sda5?

So long as you are using PSBs, this should not happen.
sdb5 will have a later event count.
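
A persistent superblock is enabled per array in /etc/raidtab. A minimal
sketch, modelled on the raidtab quoted earlier in this digest:

raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              4
        persistent-superblock   1    # this line is what makes it a PSB array
        device                  /dev/sda5
        raid-disk               0
        device                  /dev/sdb5
        raid-disk               1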

 BTW, what's the easiest way to replace the failed disk? Will something
 like "dd /dev/sda /dev/sdb count=1"

Yikes! This sounds hairy!

No blocksize ...
If the new sdb is identical to sda, it will set up the primary partitions and
the extended partition, but it will not set up all the logical partitions.
I'd do it by hand, but if you are *SURE* they are the same, use sfdisk ...
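
A sketch of the sfdisk route - an illustration, not a tested recipe, and
only sane if the disks really are identical:

sfdisk -d /dev/sda | sfdisk /dev/sdb    # dump sda's partition table, write it to sdb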

 and "raidhotadd /dev/md0 /dev/sdb5" work?

Once the logical partition is set up, that should work.

 (assuming the old sdb becomes sda after the replacement)
(it depends on the SCSI ID and the SCSI bus used. I assume you would replace
sda with a new disk, and leave sdb as-is, in which case it would remain sdb)



Re: Failed disk - how will it reboot?

2000-04-25 Thread Martin Bene

Hi Jochen,

At 10:46 25.04.00, Jochen Scharrlach wrote:
on a server one disk had a "medium error" and the RAID1 (2.2.14-B1)
disabled one of the mirrors. It looks like this:

md0 : active raid1 sdb5[1] sda5[0](F) 4739072 blocks [2/1] [_U]

If I reboot now, how will the system react? Will it recognize the
failed partition or (worst case) will it try to overwrite the data on
sdb5 with sda5?

Not a problem; it will stay in degraded mode, and it won't damage the good
data on sdb. Is sda the disk you're booting the system from? If so, you
should have a boot disk handy when replacing sda; you'll probably have a
system that doesn't want to boot off disk when plugging in a new sda
(without playing around with SCSI IDs etc.).

BTW, what's the easiest way to replace the failed disk? Will something
like "dd /dev/sda /dev/sdb count=1" and "raidhotadd /dev/md0 /dev/sdb5"
work? (assuming the old sdb becomes sda after the replacement)

You don't have to copy any data by hand. Install the new disk, partition
it, and add the partition using raidhotadd; the RAID will do the
synchronisation itself. I'd keep sdb as sdb and just put in a replacement
for sda; however, you'll have to boot off a floppy if you do it this way.
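
A minimal sketch of that sequence - the device names are assumptions,
adjust them to your setup:

raidhotadd /dev/md0 /dev/sda5    # add the freshly partitioned replacement
cat /proc/mdstat                 # watch the resync progress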

Bye, Martin

"you have moved your mouse, please reboot to make this change take effect"
--
  Martin Bene   vox: +43-316-813824
  simon media   fax: +43-316-813824-6
  Andreas-Hofer-Platz 9 e-mail: [EMAIL PROTECTED]
  8010 Graz, Austria
--
finger [EMAIL PROTECTED] for PGP public key




Re: Problems again

2000-04-25 Thread danci

On Tue, 28 Mar 2000, Danilo Godec wrote:

 Well, the chassis is an Intel pre-installed rack mountable one with
 hot-swappable SCSI backplane. All the cables were there allready connected
 to disk racks. All I had to do was to install the disks in the racks and
 slide them in.

I finally managed to get myself to the server location, where I found out
that two big fans of the 19" rack enclosure had failed (not only were they
not spinning properly, they were VERY hot, acting more like a hairdryer
than cooling fans).

I removed them, and so far I have 9 days of up-time without a single SCSI
hiccup. Of course, while being there I also replaced the SCSI cable and
installed a new LVD terminator (instead of using the backplane's on-board
termination).

The other thing I noticed was that the AIC7xxx chip on the motherboard has
no cooling at all, and it got quite warm even in the few minutes the
machine was on the workbench, so I decided to stick a passive cooler on it
when I get the next chance... just for the sake of it. If this is a bad
idea, I'd like to know before I do it... :)


Thanks, D.





Re: raid tools problem

2000-04-25 Thread Matthew Leung

Dear Jason,

Thanks for the quick reply.
I've unmounted /dev/sda5, /dev/sdb5, /dev/sda6 and /dev/sdb6.  However, this
time I got another problem.  I really have no idea what's going on.  Any hints?

[root@web /etc]# /sbin/mkraid --really-force /dev/md0
DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sda5, 16257748kB, raid superblock at 16257664kB
disk 1: /dev/sdb5, 16257748kB, raid superblock at 16257664kB
mkraid: aborted, see the syslog and /proc/mdstat for potential clues.

==

From: Jason Lin [EMAIL PROTECTED]
To: Matthew Leung [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: Re: raid tools problem
Date: Tue, 25 Apr 2000 00:49:37 -0700 (PDT)

Make sure /dev/sda5 and /dev/sdb5 are not mounted when

"/sbin/mkraid /dev/md0" is issued.







Re: fsck'ing RAID's

2000-04-25 Thread Theo Van Dinter

On Mon, Apr 24, 2000 at 10:24:20PM +0200, Jakob Østergaard wrote:
 Resync shouldn't change what is read from the array, as it only rebuilds the
 parity -- the redundant information -- and doesn't affect the ``real'' data.

It depends on which RAID level and which disk fails.  In this case (RAID5),
you're going to have to rebuild both parity *AND* data (this isn't RAID
[2-4]...).  While the fsck should see the same information whether or not
a resync needs to occur, it's going to be *much* slower to fsck during
a resync than after the resync is completed.  (Not to mention that the
fsck will have to recreate the data to check -- hopefully the rebuild
process will use this so it doesn't have to recreate the data twice.)

-- 
Randomly Generated Tagline:
"Capital punishment turns the state into a murderer. But imprisonment
 turns the state into a gay dungeon-master." - Emo Philips



Re: raid tools problem

2000-04-25 Thread dek_ml

"Matthew Leung" writes:
Dear Jason,

Thanks for the quick reply.
I've unmounted /dev/sda5, /dev/sdb5, /dev/sda6 and /dev/sdb6.  However, this
time I got another problem.  I really have no idea what's going on.  Any hints?

[root@web /etc]# /sbin/mkraid --really-force /dev/md0
DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sda5, 16257748kB, raid superblock at 16257664kB
disk 1: /dev/sdb5, 16257748kB, raid superblock at 16257664kB
mkraid: aborted, see the syslog and /proc/mdstat for potential clues.

Install the 2.2.14 kernel raid patch found at:

http://people.redhat.com/mingo/raid-patches/
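
A sketch of applying it - the path and file name are illustrative, based on
the 2.2.14-B1 patch mentioned elsewhere in this digest:

cd /usr/src/linux
patch -p1 < /tmp/raid-2.2.14-B1
find . -name '*.rej'    # fix any rejects by hand before rebuilding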



My benchmarks

2000-04-25 Thread Douglas Egan

I have an Intel P-III running at 450 MHz on an Intel SE440BX-2
motherboard with a patched 2.2.14 kernel and raidtools 0.90.

The RAID-5 array consists of 3 "Maxtor 51536U3" drives.  One drive is master
on the secondary motherboard IDE port (no slave).  The other 2 are alone on
the primary and secondary channels of a Promise Ultra ATA/66.  The RAID is
configured as follows:

[root@porgy /proc]# cat mdstat
Personalities : [raid5] 
read_ahead 1024 sectors
md0 : active raid5 hdg1[1] hde1[0] hdc1[2] 10005120 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
md1 : active raid5 hdg5[2] hde5[1] hdc5[0] 10005120 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
md2 : active raid5 hdg6[2] hde6[1] hdc6[0] 10004096 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
unused devices: <none>
[root@porgy /proc]# 

The following test was run on /dev/md2 mounted as /usr1

[degan@porgy tiobench-0.29]$ ./tiobench.pl --block 4096
No size specified, using 510 MB
Size is MB, BlkSz is Bytes, Read, Write, and Seeks are MB/sec

File   Block  Num   Seq Read      Rand Read     Seq Write     Rand Write
Dir    Size   Size  Thr  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)
------ ------ ----- ---  -----------  -----------  -----------  -----------
.      510    4096  1    21.63 10.4%  0.757 0.87%  19.62 17.1%  0.799 2.45%
.      510    4096  2    22.89 12.0%  0.938 0.90%  20.18 17.7%  0.792 2.38%
.      510    4096  4    21.85 12.5%  1.113 1.09%  20.37 18.1%  0.784 2.19%
.      510    4096  8    20.57 13.1%  1.252 1.34%  20.54 18.5%  0.776 2.34%


I am not sure what to make of the results, but I am happy with my RAID's
operation.  I only post them FYI.


+-+
| Douglas EganWind River  |
| |
| Tel   : 847-837-1530|
| Fax   : 847-949-1368|
| HTTP  : http://www.windriver.com|
| Email : [EMAIL PROTECTED] |
+-+



stability of 0.90

2000-04-25 Thread brian

I've been running raid1 (kernel 2.0, then 2.2) on a fileserver for over a
year now. I have suddenly seen the need to upgrade to raid 0.90 after having
a power failure + UPS failure; I _need_ hot recovery (12GB takes about 2hrs
to recover with the current code!). How stable is 0.90? Under
people.redhat.com/mingo, the file is labeled "dangerous". But I can't use
the 2.2.11 code under kernel.org 'cause 2.2.11 has that nasty little TCP
memory leak bug.

Thanks in advance,

Brian Jonnes
Init Systems - Linux consulting
(031) 765-5269  (082) 555-7737  [EMAIL PROTECTED]



Re: performance limitations of linux raid

2000-04-25 Thread remo strotkamp

bug1 wrote:
 
 Clay Claiborne wrote:
 
  For what it's worth, we recently built an 8 ide drive 280GB raid5 system.
  Benchmarking with HDBENCH we got  35.7MB/sec read and 29.87MB/sec write. With
  DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about
  43MB/sec.
  The system is a 600Mhz P-3 on a ASUS P3C2000 with 256MB of ram, the raid drives
  are 40GB Maxtor DMA66, 7200 RPM, and each is run as master on its own channel.
 
  Turning on DMA seems to be the key. Benchmarking the individual drives with
  HDBENCH we got numbers like 2.57MB/sec read and 3.27MB/sec write with DMA off
  and it jumped up to 24.7MB/sec read and 24.2MB/sec write with it on.
 
  That, and  enough processing power to see that paritity calc is not a
  bottleneck.
 
 
 Can you use your raid system with DMA turned on or do you get irq
 timeouts like me ?


Gonna jump on the bandwagon here... :-)

How often do you get them? On all the disks, or just some of them?
In all the modes, or just with the Ultra66?

I get some every couple of days, normally resulting in DMA getting
disabled for the specific drive... :-(

But the filesystems seem to be OK...


remo



RE: stability of 0.90

2000-04-25 Thread Gregory Leblanc

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, April 25, 2000 10:24 AM
 To: [EMAIL PROTECTED]
 Subject: stability of 0.90
 
 
 I've been running raid1 (kernel 2.0, then 2.2) on a fileserver for over a
 year now. I have suddenly seen the need to upgrade to raid 0.90 after
 having a power failure + UPS failure; I _need_ hot recovery (12GB takes
 about 2hrs to recover with the current code!). How stable is 0.90? Under

I've had no trouble with it, running a stripe set (RAID 0) for about 4 months
now.

 people.redhat.com/mingo, the file is labeled "dangerous". But I can't use
 the 2.2.11 code under kernel.org 'cause 2.2.11 has that nasty little TCP
 memory leak bug.

All the RAID code is "dangerous", even the old 0.40 stuff.  The 2.2.11 patch
works all the way up to 2.2.13; for 2.2.14 you need Ingo's patch from the
above site.  raidtools-0.90 is the version you want.
Greg



PowerEdge 2450

2000-04-25 Thread Leung Yau Wai

Dear all,

I think this message may be off-topic, but I really can't find
any resource on the Internet about installing RH6.2 Linux on a Dell PowerEdge
2450 2U server with hardware RAID enabled (PERC 3/Si).

First, the Dell server can detect 4 SCSI HDs on the RAID controller.
However, the aic-7899 can't detect the RAID (logical drive) while the system
is doing POST.

Also, I can't even make the system detect the PERC controller
when installing RH 6.2, even when I use expert mode and insert the
driver for the controller.

Sorry for being off-topic, but I really can't find anywhere else to ask
for help!


Thanks all of you!

Chris





Re: stability of 0.90

2000-04-25 Thread Jakob Østergaard

On Tue, 25 Apr 2000, [EMAIL PROTECTED] wrote:

 I've been running raid1 (kernel 2.0, then 2.2) on a fileserver for over a
 year now. I have suddenly seen the need to upgrade to raid0.90 after having
 a powerfailure+UPS failure; I _need_ hot recovery (12GB takes about 2hrs to
 recover with the current code!). How stable is 0.90?

``very''.

It's more stable than the old code ever was. I'm surprised you even succeeded
in running the old code with a raid level that has redundancy; I could never
make that work under heavy load.

The 0.90 code is in use a lot of places. I have 7-8 systems or so running with
various levels (all but RAID-4 actually), and it's rock solid.

 Under
 people.redhat.com/mingo, the file is labeled "dangerous". But I can't use
 the 2.2.11 code under kernel.org 'cause 2.2.11 has that nasty little TCP
 memory leak bug

Stay away from the ``dangerous'' code.

Use Ingo's patch for 2.2.14 at the URL you mentioned.  It works with 2.2.14 and
2.2.15(pre-something).

2.2.15pre-X and the 2.2.14 RAID patch are a nice couple.  Look out for
rejects when you patch: you will most likely have to fix one small reject
in raid1.c, but it shouldn't be much of a problem, I guess.
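
Something along these lines - the reject file name is an assumption:

patch -p1 < raid-2.2.14-B1
cat drivers/block/raid1.c.rej    # inspect the failed hunk, apply it by hand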

-- 

: [EMAIL PROTECTED]  : And I see the elder races, :
:.: putrid forms of man:
:   Jakob Østergaard  : See him rise and claim the earth,  :
:OZ9ABN   : his downfall is at hand.   :
:.:{Konkhra}...:



Re: celeron vs k6-2

2000-04-25 Thread James Manning

[Seth Vidal]
  I did some tests comparing a k6-2 500 vs a celeron 400 - on a raid5
 system - found some interesting results
 
 Raid5 write performance of the celeron is almost 50% better than the k6-2.

Can you report the xor calibration results when booting them?

 Is this b/c of mmx or b/c of the FPU?

FPU should never get involved (except the FPU registers getting used
during MMX operations).

As per Greg's report of the K6-2 having MMX instructions, remember
that a chip having instructions doesn't mean they get used.  Again,
this is something that the xor calibrations should help show, though.
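
One way to pull them out after boot - a sketch, assuming the messages are
still in the kernel ring buffer:

dmesg | grep -A5 'raid5:'    # the calibration results follow the raid5 lines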

MTRR could certainly be another source of additional performance, but I
haven't dealt with the K6-2 in any capacity so I don't even know whether
it has that capability (although I haven't personally heard of anything
not based on the P6 core using MTRR)

 I used tiobench with sizes of more than 3X my memory size on both systems -
 memory and drives of both systems were identical.

If possible, let the resyncs finish before testing... this can cause a
huge amount of variance (that I've seen in my testing).  Speed-limit down
to 0 doesn't appear to help, either (although the additional seeks to
get back to the "data" area from the currently resyncing stripes could
be the base cause).

When looking from a certain realistic POV, it'd be hard to believe that
even a P5 couldn't keep up with the necessary XOR operations... is
there anything else on the system(s) fighting for CPU time?

James



Re: celeron vs k6-2

2000-04-25 Thread Seth Vidal

  Raid5 write performance of the celeron is almost 50% better than the k6-2.
 
 Can you report the xor calibration results when booting them?
Sure, I should be able to pull that out of somewhere.
From the k6-2:
raid5: MMX detected, trying high-speed MMX checksum routines
   pII_mmx   :  1121.664 MB/sec
   p5_mmx:  1059.561 MB/sec
   8regs :   718.185 MB/sec
   32regs:   501.777 MB/sec
using fastest function: pII_mmx (1121.664 MB/sec)


 If possible, let the resyncs finish before testing... this can cause a
 huge amount of variance (that I've seen in my testing).  speed-limit down
 to 0 doesn't appear to help, either (although the additional seeks to
 get back to the "data" area from the currently resyncing stripes could
 be the base cause)
I did both tests just about identically.


 When looking from a certain realistic POV, it'd be hard to believe that
 even a P5 couldn't keep up with the necessary XOR operations... is
 there anything else on the system(s) fighting for CPU time?
No - they were blanked. I didn't put them into runlevel 1, but I did shut
down everything I could.

They were pretty low load.

-sv





Re: stability of 0.90

2000-04-25 Thread Michael

 On Tue, 25 Apr 2000, [EMAIL PROTECTED] wrote:
 
  I've been running raid1 (kernel 2.0, then 2.2) on a fileserver for over a
  year now. I have suddenly seen the need to upgrade to raid0.90 after having
  a powerfailure+UPS failure; I _need_ hot recovery (12GB takes about 2hrs to
  recover with the current code!). How stable is 0.90?
 
 ``very''.
 
 It's more stable than the old code ever was. I'm surprised you even
 succeeded running the old code with a raid level that has
 redundancy, I could never make that work under heavy load.
 

You did not try hard enough. I've run 0.42 on both IDE and scsi 
root raid5 since it was first available. I still have one customer 
that has been running 12 gig 3 - disk root raid5 ide system with 0.42 
tools on a 2.0x kernel for a couple of years in a graphics service 
bureau as a file server. i.e. -- lots of big files, lots of traffic. 
Theyve had a hard disk failure -- survived! -- and numerous stupid 
shut downs without dismount of raid (they just turned off the power), 
but the system has ups+fail detect and has successfully run for a 
long time with power outages, restarts, etc... Never lost any data. 
Slackware ~3.x low number.

Michael
[EMAIL PROTECTED]



Re: celeron vs k6-2

2000-04-25 Thread Stephen Waters

Early stepping K6-2s did not have an MTRR; later steppings do (I believe
stepping 8 was the first one to have an MTRR, but I can't say for
certain):

my cpu:

processor   : 0
vendor_id   : AuthenticAMD
cpu family  : 5
model   : 8
model name  : AMD-K6(tm) 3D processor
stepping: 0
cpu MHz : 300.689223
fdiv_bug: no
hlt_bug : no
sep_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 1
wp  : yes
flags   : fpu vme de pse tsc msr mce cx8 sep mmx 3dnow
bogomips: 599.65

this guy's http://www.tux.org/hypermail/linux-kernel/1999week21/0052.html
cpu:

processor : 0
vendor_id : AuthenticAMD
cpu family : 5
model : 8
model name : AMD-K6(tm) 3D processor
stepping : 12
cpu MHz : 350.810582
fdiv_bug : no
hlt_bug : no
sep_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr mce cx8 sep mtrr pge mmx 3dnow
bogomips : 699.60
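
To check a given chip, look for "mtrr" in the flags line of /proc/cpuinfo:

grep flags /proc/cpuinfo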


James Manning wrote:

 MTRR could certainly be another source of additional performance, but I
 haven't dealt with the K6-2 in any capacity so I don't even know whether
 it has that capability (although I haven't personally heard of anything
 not based on the P6 core using MTRR.




Re: performance limitations of linux raid

2000-04-25 Thread Paul Jakma

On Mon, 24 Apr 2000, Frank Joerdens wrote:

  I've been toying with the idea of getting one of those for a while, but
  there doesn't seem to be a linux driver for the FastTrack66 (the RAID
  card), only for the Ultra66 (the not-hacked IDE controller), and that
  driver has only 'Experimental' status with current production kernels:


Clue: the Promise IDE RAID controller is NOT a hardware RAID
controller.

Promise IDE RAID == Software RAID where the software is written by
Promise and sitting on the ROM on the Promise card getting called by
the BIOS.
  

  I even wrote to Promise to ask when or if a linux driver might become
  available, but didn't get much of an answer (they replied that there
  was a driver available for the Ultra, although I had specifically asked
  for the RAID card).

Don't bother... a driver for Promise RAID == software RAID. You
already have software RAID in Linux.
  
  If anyone hears about a Linux driver for this card, I'd like to know.
  
  Cheers, Frank
  
  

-- 
Paul Jakma  [EMAIL PROTECTED]
PGP5 key: http://www.clubi.ie/jakma/publickey.txt
---
Fortune:
I use not only all the brains I have, but all those I can borrow as well.
-- Woodrow Wilson




Re: celeron vs k6-2

2000-04-25 Thread Seth Vidal

 early stepping K6-2s did not have an MTRR. later steppings do (i believe 
 stepping 8 was the first one to have an MTRR... but i can't say for 
 certain):
 
 my cpu:
 [...]
 flags   : fpu vme de pse tsc msr mce cx8 sep mmx 3dnow
 

Important flags from my cpu:

flags   : fpu vme de pse tsc msr mce cx8 sep mtrr pge mmx 3dnow

Interesting - mtrr is there.

So maybe it's motherboard quality.

-sv





Re: performance limitations of linux raid

2000-04-25 Thread Daniel Roesen

On Tue, Apr 25, 2000 at 10:28:46PM +0100, Paul Jakma wrote:
 Clue: the Promise IDE RAID controller is NOT a hardware RAID
 controller.
 
 Promise IDE RAID == Software RAID where the software is written by
 Promise and sitting on the ROM on the Promise card getting called by
 the BIOS.

Clue: this is the way every RAID controller I know of works these days.


PS: Linux doesn't use BIOS to access devices.



RE: performance limitations of linux raid

2000-04-25 Thread Gregory Leblanc

 -Original Message-
 From: Daniel Roesen [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, April 25, 2000 3:07 PM
 To: [EMAIL PROTECTED]
 Subject: Re: performance limitations of linux raid
 
 
 On Tue, Apr 25, 2000 at 10:28:46PM +0100, Paul Jakma wrote:
  Clue: the Promise IDE RAID controller is NOT a hardware RAID
  controller.
  
  Promise IDE RAID == Software RAID where the software is written by
  Promise and sitting on the ROM on the Promise card getting called by
  the BIOS.
 
 Clue: this is the way every RAID controller I know of works these days.

Then you've never used a RAID card.  I've got a number of RAID cards here, 2
from Compaq, 1 from DPT, and another from HP (really AMI), and all of them
implement RAID functions like striping, double writes (mirroring), and
parity calculations for RAID4/5 in firmware, using an onboard CPU.  All the
controllers here are i960 based, but I've heard that the StrongARM procs are
much faster at parity calculations.  The controllers that I've used that
are software are the Adaptec AAA series boards.  The other one that I know
of is this Promise thing.
Greg



Re: performance limitations of linux raid

2000-04-25 Thread Drake Diedrich

On Mon, Apr 24, 2000 at 09:13:20PM -0400, Scott M. Ransom wrote:
 
 Then I moved back to kernel 2.2.15-pre18 with the RAID and IDE patches
 and here are my results:
 
   RAID0 on Promise Card 2.2.15-pre18 (1200MB test)
 --
  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
  K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
   6833 99.2 42532 44.4 18397 42.2  7227 98.3 47754 33.0 182.8  1.5
 **
 
 When doing _actual_ work (I/O bound reads on huge data sets), I often
 see sustained read performance as high as 50MB/s.
 
 Tests on the individual drives show 28+ MB/s.

   What stripe size, CPU and memory is used here?  I have a similar setup
(2.2.15pre19, IDE+RAID patches), 4 master IDE Deskstars, on-board VIA and
off-board Promise controllers, and a K6-2/500 with 256M RAM, and see 22MB/s
native but a max of 28 MB/s for all stripes.  dd/hdparm -t on multiple
drives simultaneously appears to show complete contention between the
separate chains.

hdparm -t /dev/hde
19.16 MB/sec

( hdparm -t /dev/hde ) ; (hdparm -t /dev/hdg )
11.43 MB/sec
10.47 MB/sec

   With all four drives, throughput per drive is less than 7MB/sec. With
RAID0 across all four drives I get 28 MB/sec according to bonnie, vs 22
MB/sec on single drives.  I had been attributing that to a singly-entrant
IDE driver in 2.2, but your results make me think there's some other reason
I don't see linear speedups.  Is this a dual CPU system perhaps?  Something
unusual about the interrupt handling?  UDMA33 vs. UDMA66 (I'm using
40-conductor cables, perhaps I need the 80-conductor ones)?



Re: performance limitations of linux raid

2000-04-25 Thread bug1

remo strotkamp wrote:
 
 bug1 wrote:
 
  [...]
  
 
  Can you use your raid system with DMA turned on or do you get irq
  timeouts like me ?
 
 Gonna jump on the bandwagon here... :-)

 How often do you get them? On all the disks, or just some of them?
 In all the modes, or just with the Ultra66?

 I get some every couple of days, normally resulting in DMA getting
 disabled for the specific drive... :-(

 But the filesystems seem to be OK...
 
 remo

Yes, I get them on my disks (Quantum XA and KX, and both my IBM DPTA
372050s); they usually start at the last IDE channel and work backwards.

In the last 24 hours I've been getting them when e2fsck runs after
rebooting. The usual cause of rebooting is an IRQ causing a lockup, or
endlessly looping trying to get an IRQ.

I'm convinced it's my HPT366 controller; I've mentioned my problem in a few
channels, no luck yet.

I used to think it was the raid code, but I get it with LVM as well. It
happens more often from reading than writing via the HPT366, and the more
load placed on the controller, the more likely it is to lock up. One drive
by itself just loses interrupts (sometimes it can recover); if I use
three or four channels, using both my onboard HPT366 and my PCI card, it
locks up hard in a fraction of a second.

I want to try and work this problem out, but I'm not a kernel hacker.

Does anyone have any advice on how to get into kernel debugging? I know C;
I just don't know the kernel. I know how to use ksymoops, and that's about it.

Glenn



can't locate module block-major-22

2000-04-25 Thread Jason Lin

Hi there:

After my raid-1 was up and running, I shut down the
machine and took out one hard disk (the one without
Linux installed), just to see how it behaves.
During reboot it drops to single user mode due to a RAID
device error.

"raidstart /dev/md0"
modprobe: can't locate module block-major-22
/dev/md0: invalid argument

Is this normal?
I was hoping the raid-1 would run in degraded mode,
using /dev/hda7 only (which has Linux installed).

Thanks.

J.

cat /etc/raidtab
# Config file for raid-1 device.

raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              4
        persistent-superblock   1

        device                  /dev/hdc7
        raid-disk               0

        device                  /dev/hda7
        raid-disk               1

raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              4
        persistent-superblock   1




Re: performance limitations of linux raid

2000-04-25 Thread Scott M. Ransom

 

   What stripe size, CPU and memory is used here?

System is a dual-CPU PII 450MHz with 256MB RAM.
Disks are configured with a chunk-size of 32kb (ext2 block-size is 4kb).

 Is this a dual CPU system perhaps?  Something

Yes.  See above.

 unusual about the interrupt handling?  UDMA33 vs. UDMA 66 (I'm using 40
 conductor cables, perhaps I need the 80s)?

I am using UDMA66.  When testing against UDMA33, I found a 10-15% speed
difference.  To get that speed increase you must have the UDMA66
cables...

Scott

-- 
Scott M. Ransom   
Phone:  (781) 320-9867 Address:  75 Sanderson Ave.
email:  [EMAIL PROTECTED]   Dedham, MA  02026
PGP Fingerprint: D2 0E D0 10 CD 95 06 DA  EF 78 FE 2B CB 3A D3 53



Re: PowerEdge 2450

2000-04-25 Thread cprice



On Wed, 26 Apr 2000, Leung Yau Wai wrote:

 Dear all,
 
 I think this message may be off-topic, but I really can't find
 any resource on the Internet about installing RH6.2 Linux on a Dell
 PowerEdge 2450 2U server with hardware RAID enabled (PERC 3/Si).

I believe it was posted here a couple of days ago that Dell is
providing a binary-only driver for the PERC3 embedded controller for the
2400/2450 and 4400/4450 series PowerEdge servers. I have not seen this
myself, but I would go to Dell's support section and do a listing for
drivers by your system type - I just checked, and there is a PERC3 driver
for RedHat 6.2 there.

 
 First, the Dell server can detect 4 SCSI HDs on the RAID controller.
 However, the aic-7899 can't detect the RAID (logical drive) while the
 system is doing POST.
 

AFAIK, the aic-7899 should NOT see the logical drive, as it is
controlled directly by the PERC3 raid controller. 


 Also, I can't even make the system detect the PERC controller
 when installing RH 6.2, even when I use expert mode and insert the
 driver for the controller.
 
You *are* using the driver from Dell for the PERC3?


Chris