Re: raid metadata version - grub-pc 1.99-27+deb7u1 (grub2 wheezy)

2013-08-18 Thread Georgi Naplatanov

On 08/16/2013 06:18 AM, Karl Schmidt wrote:

Back on 2011-09-29 there was a problem booting from a RAID partition
with mdadm metadata version 1.2.  There was a workaround - but I'm
trying to update some documentation for wheezy.

It looks like this has not been dealt with?

At issue is working with modern drives, now larger than 2 TB, that
require a GPT partition table.

I think this is still rather messy.

(Best workaround: put your system on a pair of RAIDed SSDs using 0.90
metadata and boot off those smaller drives; mount your large drive
at /home or wherever.)

Does anyone know if this has changed?




Karl Schmidt  EMail k...@xtronics.com
Transtronics, Inc.  WEB
http://secure.transtronics.com
3209 West 9th Street Ph (785) 841-3089
Lawrence, KS 66049  FAX (785) 841-0434

Socialism is just another form of stealing.
Government takes, by the threat of force,
what isn't rightfully theirs to take.
kps






I confirm that software RAID 1 with GPT and metadata version 1.2 is
bootable on:

- Squeeze grub-pc (amd64)
- Wheezy grub-pc (amd64) and grub-efi (amd64).
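
For reference, a minimal sketch of that kind of setup (device names,
sizes and the partition layout are assumptions, not a tested recipe;
grub-pc on GPT needs a small BIOS boot partition to embed itself):

  # GPT label plus a 1 MiB BIOS boot partition on each disk
  parted /dev/sda mklabel gpt
  parted /dev/sda mkpart grub 1MiB 2MiB
  parted /dev/sda set 1 bios_grub on
  parted /dev/sda mkpart raid 2MiB 100%
  # ... repeat the partitioning for /dev/sdb ...

  # RAID1 with 1.2 metadata (0.90 was the old workaround Karl mentions)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.2 \
        /dev/sda2 /dev/sdb2

  # write grub to both disks so either one can boot
  grub-install /dev/sda
  grub-install /dev/sdb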

Best regards
Georgi





Re: raid + lvm setup

2006-12-12 Thread Daniel Tryba
On Sat, Dec 09, 2006 at 08:11:00PM -0500, Douglas Tutty wrote:
[snip]
> Does this seem like a workable/wise plan or here there be dragons?  Is
> there any reason to think that 20 GB is too small for a fully installed
> workstation including swap and /tmp (everything but /home)?

Sounds sane; except for the encrypted stuff, this resembles my setup.
20 GB is more than enough (I personally have something like 7 GB used of
the maximum 24 GB for the base system, about half of it in the
/usr/local/ hierarchy).

On the last reinstall of a machine (debian-installer for x86) a couple of
months ago I tried the encrypted LVM stuff, but it didn't appear to work
out of the box (and I was too lame to spend the time to get it working).

Encrypted swap with a newly generated key (never stored) for each reboot
sounds interesting. Could you share your experiences if you have this
running?
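
The setup I have in mind would look roughly like this (a sketch only;
the swap partition name is a placeholder):

  # /etc/crypttab: map the swap partition at each boot with a fresh
  # random key from /dev/urandom; the "swap" option runs mkswap on it
  cswap  /dev/sda2  /dev/urandom  swap,cipher=aes-cbc-essiv:sha256,size=256

  # /etc/fstab: use the mapped device as swap
  /dev/mapper/cswap  none  swap  sw  0  0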

-- 

 When you do things right, people won't be sure you've done anything at all.

   Daniel Tryba





Re: RAID

2005-11-21 Thread Lennart Sorensen
On Sat, Nov 19, 2005 at 03:39:22PM +0100, Goswin von Brederlow wrote:
> Don't you run the risk of having to swap something out while pvmove
> has locked down the lvm then? Or other binaries that lock.

Never had a problem yet.  Of course, I have never used pvmove either.
You could always swapoff before doing anything low-level with LVM.
Normal operation is certainly fine, as LVM is just another block device
layer as far as swap is concerned.
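
Roughly this, with placeholder device names (a sketch, not a tested
procedure):

  # take all swap offline before low-level LVM surgery...
  swapoff -a
  # ...e.g. moving extents off an old PV onto a new one...
  pvmove /dev/sdb1 /dev/sdc1
  # ...then bring swap back
  swapon -a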

Len Sorensen





Re: RAID

2005-11-19 Thread Goswin von Brederlow
[EMAIL PROTECTED] (Lennart Sorensen) writes:

> On Fri, Nov 18, 2005 at 03:45:52PM +0100, Goswin von Brederlow wrote:
>> No point in that. Put / (including /boot) on raid1. Do the same for
>> swap. And then put everything else on lvm on raid5 (/var, /usr, /home,
>> /mnt/data, ...).
>
> I put swap on lvm.  Although I run lvm on raid1 (only 2 disks).
>
> Len Sorensen

Don't you run the risk of having to swap something out while pvmove
has locked down the lvm then? Or other binaries that lock.

MfG
Goswin





Re: RAID

2005-11-18 Thread Lennart Sorensen
On Fri, Nov 18, 2005 at 03:45:52PM +0100, Goswin von Brederlow wrote:
> No point in that. Put / (including /boot) on raid1. Do the same for
> swap. And then put everything else on lvm on raid5 (/var, /usr, /home,
> /mnt/data, ...).

I put swap on lvm.  Although I run lvm on raid1 (only 2 disks).

Len Sorensen





Re: RAID

2005-11-18 Thread Lennart Sorensen
On Wed, Nov 16, 2005 at 11:28:41AM -0800, lordSauron wrote:
> I have an ECS Elitegroup nForce 3-A.  I'll google for it to see if the
> nvRaid is (s||h)w but that's the hardware.

nvraid is fakeraid (aka bios software raid).

Len Sorensen





Re: RAID

2005-11-18 Thread Goswin von Brederlow
Albert Dengg <[EMAIL PROTECTED]> writes:

> Albert Dengg wrote:
>
>> Lionel Elie Mamane wrote:
>>
>>> Yes, but I just configure it to use one of the discs and dynamically
>>> switch it to the other one if the disc goes bad.
>>>
>>>
>>>
>> What's the difference from other raid levels 0/4/5/6?
>> You have to write the data to the boot sector of the disc that is
>> physically booted.
>> (OK, with level 0 you can skip writing to all discs, since if one is
>> shot you have a non-functioning array anyway.)
>
> 
> Ah, sorry, just an error in thought...
> But what's the point of having /boot on raid0?
> And for 5/6 the amount of space wasted is marginal.
>
> yours
> Albert

No point in that. Put / (including /boot) on raid1. Do the same for
swap. And then put everything else on lvm on raid5 (/var, /usr, /home,
/mnt/data, ...).
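
Sketched out with placeholder partitions (three disks assumed for the
raid5; not a tested recipe):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # /
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # swap
  mdadm --create /dev/md2 --level=5 --raid-devices=3 \
        /dev/sda3 /dev/sdb3 /dev/sdc3
  pvcreate /dev/md2
  vgcreate vg0 /dev/md2
  lvcreate -L 10G -n var vg0    # and likewise for usr, home, data, ...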

MfG
Goswin





Re: RAID

2005-11-18 Thread Goswin von Brederlow
lordSauron <[EMAIL PROTECTED]> writes:

> okay... so, let's just say that hypothetically I choose to be a little
> less than intelligent and try to use the motherboard built-in software
> RAID.  Would that work?
>
> Also, I'm a little skeptical of Linux's software RAID being better
> (ie. faster) than my motherboards.  Is there any statistics I can look
> at in regard to this?

The motherboard raid has to be handled by a special software raid
driver inside Linux. That means you get exactly the same
performance as normal software raid, but you are stuck with that
controller and its highly specific metadata. You can't plug the disks
into a different controller and have them work there anymore.

Also, the way the motherboard raid organises its data might be less
optimal than what normal software raid does. At a minimum it will be
less flexible. Linux software raid might be faster or slower, but it is
tunable to your needs; the motherboard raid will be static.

> Once more, thanks for your considerable patience in dealing with me -
> someday I might evolve into a semi-respectable linux sub-guru...

It is strongly recommended not to use the motherboard raid on any
board. It has no advantages, only drawbacks.

MfG
Goswin





Re: RAID

2005-11-17 Thread Nicholas P. Mueller
That (nvraid) is definitely software raid.  Every built-in RAID
controller on *consumer grade* hardware I have ever seen is software
raid, including controllers/chips by Silicon Image, Marvell, Broadcom
and NVIDIA.


In such a case, linux software raid is definitely the way to go.

NPM

On Nov 16, 2005, at 1:28 PM, lordSauron wrote:


I have an ECS Elitegroup nForce 3-A.  I'll google for it to see if the
nvRaid is (s||h)w but that's the hardware.







Re: RAID

2005-11-16 Thread lordSauron
I have an ECS Elitegroup nForce 3-A.  I'll google for it to see if the
nvRaid is (s||h)w but that's the hardware.



Re: RAID

2005-11-16 Thread Helge Hafting

lordSauron wrote:


okay... so, let's just say that hypothetically I choose to be a little
less than intelligent and try to use the motherboard built-in software
RAID.  Would that work?

Also, I'm a little skeptical of Linux's software RAID being better
(ie. faster) than my motherboards.  Is there any statistics I can look
at in regard to this?
 


It depends on _how_ the motherboard does RAID.  If the motherboard
has a true RAID controller that does raid in hardware - then sure,
it may beat Linux software raid.  And it may not - Linux has certainly
beaten some not-so-good hardware raid controllers.

However, if the motherboard "raid" is merely a stupid BIOS software raid,
then Linux will perform much better.  How do you know?  If Windows needs
a driver (supplied with the board) to use raid on that controller,
but no driver if the controller uses only one disk - then you have
one of those common fake raid controllers.  Another way to know: if
you have a real hardware raid controller, then it was expensive too.


Helge Hafting





Re: RAID

2005-11-12 Thread lordSauron
On 11/12/05, Anthony DeRobertis <[EMAIL PROTECTED]> wrote:
> Lennart Sorensen wrote:
> > Even on a fileserver it is becoming hard to have your network link
> > outperform your disk IO unless you have a very good gigabit or better
> > network link.  Any single modern disk can saturate a 100Mbit link.

> This is not always true. If you have a lot of seeks (e.g., because
> people are accessing hundreds of large files simultaneously) disks start
> to become a bottleneck.

In that case you really should be distributing your data over more
places.  For my networks, I prefer to use network-attached storage to
keep most of the I/O load off the CPU - plus I can then have many servers
point to one drive without big hairy networking problems.

Do I have a network like this?  No... but that's the network I'm about
to build for my devgroup.  So, all that I just said is in theory -
feel free to let the heavy hammer of experience drop straight down on
me, since I probably need it.

Oh, and this thread is - by now - officially Off-Topic.  If someone
really doesn't like all this OT traffic, tell me to shut up and I
will.  But until then I'm assuming that no-one really minds this
*that* much.



Re: RAID

2005-11-12 Thread Anthony DeRobertis
Lennart Sorensen wrote:

> Even on a fileserver it is becoming hard to have your network link
> outperform your disk IO unless you have a very good gigabit or better
> network link.  Any single modern disk can saturate a 100Mbit link.

This is not always true. If you have a lot of seeks (e.g., because
people are accessing hundreds of large files simultaneously) disks start
to become a bottleneck.





Re: RAID

2005-11-12 Thread Anthony DeRobertis
Corey Hickey wrote:

> Compared to constant operation, it would certainly be worse to turn a
> hard drive on and off with a period of one minute, and it would be
> better with a period of one year; I just don't know at what point
> on the continuum once per day lies. Is it better or worse? Since this
> thread is somewhat off-topic anyway, does anyone care to venture a guess?

I'd guess that desktop hard drives are designed to be spun up and down
many, many times; consider, for example, the effects of power saving
which can easily spin down and up the drive multiple times per day.

I also seem to remember at least one vendor of desktop drives raising a
stir by writing an expectation of eight hours off per day into their
warranty terms.





Re: RAID

2005-11-11 Thread Lennart Sorensen
On Thu, Nov 10, 2005 at 11:44:26AM -0800, lordSauron wrote:
> So with RAID 1 I'll get write-speeds like I only have one drive with
> no RAID, but with RAID 0 I'll get write speeds like I have umm... RAID
> 0...  but the read speeds are (effectively) the same, correct?

Well raid0 reads twice as fast since you only read half the data from
each drive, although theoretically you can make raid1 read half the data
from each disk too.  Not sure if this is usually done or not.

Raid0 certainly does write faster.

> True...  makes me wonder about the frequency of drive failures.  Those
> who have experience: how often does that happen?  I've never had a
> drive fail in my life, but I'm your normal desktop user so my PC is
> off for about 8-16 hours a day.
> 
> Dang it, fate would have it that I should leave now, but I shall
> return (sometime later today, if fate does not conspire against me...)
> 
> Thanks for all your great help and I hope that I'll someday be able to
> repay you all in kind...

Well, I have had many IBM hot-swap SCSI disks fail (9, 18 and 36G drives)
when I did sysadmin work a couple of years ago.  I was quite sick of
them, really.

I have in the past seen many WD drives fail that were 500-1300M in size,
while I haven't had problems with current WD drives.  I have had an 80G
Maxtor die within 8 hours of being installed in a machine (so it was
hardly even in use yet and no data was lost, of course).  I have had a
bunch of IBM 20G drives fail (part of that series involving class-action
lawsuits, as far as I can tell).

Len Sorensen





Re: RAID

2005-11-11 Thread Lennart Sorensen
On Thu, Nov 10, 2005 at 11:30:51AM -0800, lordSauron wrote:
> On 11/10/05, Albert Dengg <[EMAIL PROTECTED]> wrote:
> > see http://en.wikipedia.org/wiki/Redundant_Array_of_Independent_Disks
> 
> I just got to read that most excellent article and it raised one
> question about the theoretical configuration:
> 
> RAID 1 or 0?  It said that 0+1 would mirror two RAID 0 arrays - very
> useful, if I only had 4 disks...  So, since I'm only getting two, I
> just wondered whether RAID 1 really has the ability to give
> performance increase similar to RAID 0.  Since I'm going to get two 80
> Gb SATA150s, I  don't want to sacrifice performance, since eventually
> the machine will become my webserver (yay!) though I'm still not sure
> I'll need that much performance... but nevertheless it doesn't hurt to
> be prepared.
> 
> So in more clear and less confused words, does RAID 1 really share the
> advantages in speed of RAID 0?  I'm just a little skeptical, and I'd
> like to know if there's any testimonies of people that have actually
> had the chance to find out.

raid1 has the same performance as a single disk, but with added
reliability, since you store identical data on both.  Now, a web server
probably doesn't need more speed than a single disk or raid1 can give,
since the bottleneck is almost always the network link and not disk
speed, or sometimes the CPU if you are doing a lot of scripting.

Even on a fileserver it is becoming hard to have your network link
outperform your disk IO unless you have a very good gigabit or better
network link.  Any single modern disk can saturate a 100Mbit link.

Len Sorensen





Re: RAID

2005-11-10 Thread lordSauron
On 11/10/05, Corey Hickey <[EMAIL PROTECTED]> wrote:
> lordSauron wrote:
> > True...  makes me wonder about the frequency of drive failures.  Those
> > who have experience: how often does that happen?  I've never had a
> > drive fail in my life, but I'm your normal desktop user so my PC is
> > off for about 8-16 hours a day.
>
> It might actually be more stressful on a hard drive to power it on and
> off once per day than it would be to just leave it on 24/7. Accelerating
> the platters requires a lot more work from the motor than simply
> maintaining constant velocity.

You've got a valid point; however, in a PC that doesn't have proper
cooling, continuous operation could be fatal to the drive's
health.  I love my massive ATX full tower because of the good airflow
I can get through it (I built it myself - the fans suck air through
the front grille and shove the hot air out the back), and the large
volume of the case allows for easier heat dissipation into the air...

All the drive failures I've heard of were for drives that were always
on, save one that was in a PC that had almost no cooling whatsoever. 
That's where my concerns come from.

> Compared to constant operation, it would certainly be worse to turn a
> hard drive on and off with a period of one minute, and it would be
> better with a period of one year; I just don't know at what point
> on the continuum once per day lies. Is it better or worse? Since this
> thread is somewhat off-topic anyway, does anyone care to venture a guess?

Things tend to break in a change of state.  The shuttle never blows up
while orbiting; it's only during takeoff or landing.  Hard drives - to
the best of my knowledge - die either at startup (since shutdown is a
piece of cake - retract the heads and let the platters spin down,
right?) or when temperature wears down their sensitive construction,
and I've heard of a few problems with altitude (there was a case of an
observatory being limited to a handful of drives since the other models'
seals weren't really airtight - just think of heads skidding across the
disks due to insufficient air to float the heads).

Both sound like prime times for the drives to fail, though it sounds
like the first - startup and shutdown - isn't really applicable to
me, since I turn my box on once per day and then leave it there for a
few hours until I stop for the night.  Heat is a problem I monitor
somewhat carefully - I'm not neurotic about it, but I don't turn a
blind eye either.

> 4. Toshiba 40GB 4200RPM laptop drive. When I bought a used laptop I
> figured I'd have to replace the hard drive. A month later I got
> DriveReady SeekComplete errors. Over a few days, the errors got more and
> more frequent.

IMHO, laptop HDDs are prime targets for drive failures.  Due to
battery power constraints they're turned on and off more often than
any other type of drive, and they have to deal with a lot more
environmental monkey business than your average box-on-the-desk.  Think
about airplanes, sudden jolts... the list just goes on and on...  it's
a wonder they don't break more often!

> In all, that's about 1/8 of all the drives I've owned or operated. I've
> also seen 5 or 6 other people's hard drives fail. 3 of those were IBM
> Deathstar 60GXPs.

he he he... your typo was funny.

I currently run off of an IBM DeskStar 27.x gig HDD, though I hope to
do away with it soon with new SATA drives to take advantage of the
faster nature of SATA150 technology.

To me it also sounds like Linux is slightly more demanding of the
drives than Windows is.  Then again, I will never go back from Samba -
I actually got to *use* *all* of the 100 Mbps in my network for the
first time!  It was beautiful...  600 megs in ~15 seconds...
Linux actually uses the system resources.  It's no wonder ~50% of
all servers run Linux - nothing else is cost-effective!

--
=== GCB v3.1 ===
GCS d-(+) s+:- a? C+() UL+++() P L++(+++)
E- W+(+++) N++ w--- M>++ PS-- PE Y+ PGP- t++(+++) 5?
X? R !tv>-- b++> DI+++> D-- G !e h(*) !r x---
=== EGCB v3.1 ===



Re: RAID

2005-11-10 Thread Corey Hickey
lordSauron wrote:
> True...  makes me wonder about the frequency of drive failures.  Those
> who have experience: how often does that happen?  I've never had a
> drive fail in my life, but I'm your normal desktop user so my PC is
> off for about 8-16 hours a day.

It might actually be more stressful on a hard drive to power it on and
off once per day than it would be to just leave it on 24/7. Accelerating
the platters requires a lot more work from the motor than simply
maintaining constant velocity.

Compared to constant operation, it would certainly be worse to turn a
hard drive on and off with a period of one minute, and it would be
better with a period of one year; I just don't know at what point
on the continuum once per day lies. Is it better or worse? Since this
thread is somewhat off-topic anyway, does anyone care to venture a guess?

I've had four IDE drives die on me over the years:

1. Western Digital 3.2GB 5400RPM. This survived about 2 years of mixed
usage. At first it was power-cycled often, but later on I left it on
more of the time. The failure was a sudden head crash.

2. Maxtor 30GB 5400RPM. This one died after only a few months with a
sudden head crash (it got very hot, too). I replaced it with a 1 year
old Maxtor 27GB 7200RPM, and it's still working now after 4 years of
constant operation.

3. Western Digital 120GB 7200RPM. This drive gradually got very noisy,
and eventually Linux started reporting DriveReady SeekComplete errors.
Over a few days, these errors got more and more frequent.

4. Toshiba 40GB 4200RPM laptop drive. When I bought a used laptop I
figured I'd have to replace the hard drive. A month later I got
DriveReady SeekComplete errors. Over a few days, the errors got more and
more frequent.

In all, that's about 1/8 of all the drives I've owned or operated. I've
also seen 5 or 6 other people's hard drives fail. 3 of those were IBM
Deathstar 60GXPs.

-Corey





Re: RAID

2005-11-10 Thread Albert Dengg
On Thu, Nov 10, 2005 at 11:19:30AM -0800, lordSauron wrote:
> On 11/10/05, Colin Baker <[EMAIL PROTECTED]> wrote:
> > For what you would end up spending on such a card, you're probably
> > better off just buying a faster hard drive.
> 
> So I'm really going to be in the neighborhood of ~$100 USD or more for
> the card alone?
> 
> Well, a faster HDD won't help much, since I can only go up to SATA150
> without buying a better controller card.  However, if I can RAID 0
> some cheap drives together, that'd be great, since I could have a
> bigger drive and faster.
Well... today's HDDs aren't as fast as the bus...
There are, for example, the Western Digital Raptor drives, which are
faster than some other drives...
and the controller also makes a difference (not only for raid, also
for single-disc performance).
In my experience with raid0 on two 40GB Seagate IDE HDDs, it
doesn't make that much of a difference for random reads of small files,
only for big files (videos, ...).

> So, if I were to get two SATA150 drives, plug 'em into my box, and
> shove in the Debian Sarge amd64 install disk, how easy would the
> install be?  Could a dolt like me (as in not terribly command-line
> savvy) get through with only sub-fatal injuries?
Last time I checked there was an option in d-i (at least in expert mode)
for setting up Linux software raid... so it shouldn't be such a problem
(I haven't tried it though; I use raid1 only for my home directory and
some important data... irrelevant at install time).

yours
Albert

-- 
Albert Dengg <[EMAIL PROTECTED]>




Re: RAID

2005-11-10 Thread lordSauron
On 11/10/05, Bernd Petrovitsch <[EMAIL PROTECTED]> wrote:
> On Thu, 2005-11-10 at 11:30 -0800, lordSauron wrote:
> [...]
> > So in more clear and less confused words, does RAID 1 really share the
> > advantages in speed of RAID 0?  I'm just a little skeptical, and I'd

> No: RAID0 (striping) can write half of the data to each disk in
> parallel, whereas a read must gather the correct blocks from both.
> On RAID1 (mirroring), you write the data completely to each disk
> and read from only one of them.

So with RAID 1 I'll get write-speeds like I only have one drive with
no RAID, but with RAID 0 I'll get write speeds like I have umm... RAID
0...  but the read speeds are (effectively) the same, correct?

> The performance is usually irrelevant for the decision of RAID1 vs
> RAID0.

True...  makes me wonder about the frequency of drive failures.  Those
who have experience: how often does that happen?  I've never had a
drive fail in my life, but I'm your normal desktop user so my PC is
off for about 8-16 hours a day.

Dang it, fate would have it that I should leave now, but I shall
return (sometime later today, if fate does not conspire against me...)

Thanks for all your great help and I hope that I'll someday be able to
repay you all in kind...

--
=== GCB v3.1 ===
GCS d-(+) s+:- a? C+() UL+++() P L++(+++)
E- W+(+++) N++ w--- M>++ PS-- PE Y+ PGP- t++(+++) 5?
X? R !tv>-- b++> DI+++> D-- G !e h(*) !r x---
=== EGCB v3.1 ===



Re: RAID

2005-11-10 Thread Bernd Petrovitsch
On Thu, 2005-11-10 at 11:30 -0800, lordSauron wrote:
[...]
> So in more clear and less confused words, does RAID 1 really share the
> advantages in speed of RAID 0?  I'm just a little skeptical, and I'd

No: RAID0 (striping) can write half of the data to each disk in
parallel, whereas a read must gather the correct blocks from both.
On RAID1 (mirroring), you write the data completely to each disk
and read from only one of them.

But with RAID1 you can lose one disk and keep using the filesystem
without any interruption, whereas the death of one disk on RAID0 renders
the filesystem unusable and thus dead.

> like to know if there's any testimonies of people that have actually
> had the chance to find out.

The performance is usually irrelevant for the decision of RAID1 vs
RAID0.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services





Re: RAID

2005-11-10 Thread lordSauron
On 11/10/05, Albert Dengg <[EMAIL PROTECTED]> wrote:
> see http://en.wikipedia.org/wiki/Redundant_Array_of_Independent_Disks

I just got to read that most excellent article and it raised one
question about the theoretical configuration:

RAID 1 or 0?  It said that 0+1 would mirror two RAID 0 arrays - very
useful, if only I had 4 disks...  So, since I'm only getting two, I
just wondered whether RAID 1 really has the ability to give a
performance increase similar to RAID 0.  Since I'm going to get two 80
GB SATA150s, I don't want to sacrifice performance, since eventually
the machine will become my webserver (yay!), though I'm still not sure
I'll need that much performance... but nevertheless it doesn't hurt to
be prepared.

So in clearer and less confused words, does RAID 1 really share the
speed advantages of RAID 0?  I'm just a little skeptical, and I'd
like to know if there are any testimonies from people who have actually
had the chance to find out.



Re: RAID

2005-11-10 Thread lordSauron
On 11/10/05, Colin Baker <[EMAIL PROTECTED]> wrote:
> For what you would end up spending on such a card, you're probably
> better off just buying a faster hard drive.

So I'm really going to be in the neighborhood of ~$100 USD or more for
the card alone?

Well, a faster HDD won't help much, since I can only go up to SATA150
without buying a better controller card.  However, if I can RAID 0
some cheap drives together, that'd be great, since I could have a
bigger and faster drive.

So, if I were to get two SATA150 drives, plug 'em into my box, and
shove in the Debian Sarge amd64 install disk, how easy would the
install be?  Could a dolt like me (as in not terribly command-line
savvy) get through with only sub-fatal injuries?

--
=== GCB v3.1 ===
GCS d-(+) s+:- a? C+() UL+++() P L++(+++)
E- W+(+++) N++ w--- M>++ PS-- PE Y+ PGP- t++(+++) 5?
X? R !tv>-- b++> DI+++> D-- G !e h(*) !r x---
=== EGCB v3.1 ===



Re: RAID

2005-11-10 Thread Jean-Luc Coulon (f5ibh)

On 10.11.2005 20:03:30, lordSauron wrote:

So, if I can get a decent RAID 0 PCI card that'd be better than using
my CPU time on it, correct?  Even if I'm a cheap person and get one of
the worst cards there is, would that be faster or slower than software
RAID?  Particularly Linux's software RAID.


These cheap cards are probably not really hardware raid: the raid
relies on a software module/driver and a BIOS.
As most of these solutions rely on proprietary binary-only
modules, you will not easily find a suitable module for the amd64
architecture. Software raid is reliable and efficient on a Linux system.
Fully hardware raid systems are fine, but most use SCSI, which
is not really cheap...


Jean-Luc




Re: RAID

2005-11-10 Thread Colin Baker

lordSauron wrote:


So, if I can get a decent RAID 0 PCI card that'd be better than using
my CPU time on it, correct?  Even if I'm a cheap person and get one of
the worst cards there is, would that be faster or slower than software
RAID?  Particularly Linux's software RAID.
 



For what you would end up spending on such a card, you're probably 
better off just buying a faster hard drive.






Re: RAID

2005-11-10 Thread Lennart Sorensen
On Thu, Nov 10, 2005 at 11:03:30AM -0800, lordSauron wrote:
> So, if I can get a decent RAID 0 PCI card that'd be better than using
> my CPU time on it, correct?  Even if I'm a cheap person and get one of
> the worst cards there is, would that be faster or slower than software
> RAID?  Particularly Linux's software RAID.

RAID0 doesn't require any calculations other than which block to read
from (nor does RAID1), so software and hardware should be the same speed
really.  Booting from raid0 would probably be tricky.  If you make /boot
be a plain simple partition on the first disk, and install the boot
loader to the MBR of the first disk, then you can use raid0 for the rest
of your stuff across the remaining space without any boot problems.

RAID3/4/5/6 is where XOR calculations come in, and there hardware raid
might be worthwhile.  Hardware raid is also nice for making booting
simple, since the whole raid appears as one disk to the OS.  Fake raid
does that too, of course.

Len Sorensen





Re: RAID

2005-11-10 Thread lordSauron
So, if I can get a decent RAID 0 PCI card that'd be better than using
my CPU time on it, correct?  Even if I'm a cheap person and get one of
the worst cards there is, would that be faster or slower than software
RAID?  Particularly Linux's software RAID.



Re: RAID

2005-11-10 Thread Lennart Sorensen
On Thu, Nov 10, 2005 at 09:57:02AM -0800, lordSauron wrote:
> On 11/10/05, Albert Dengg <[EMAIL PROTECTED]> wrote:
> > well it has to map the raid device to the real discs...like with all
> > raid levels
> >
> > also...
> raid 0 isn't really raid...
> raid means _redundant_ array of independent/inexpensive discs...
> and raid 0 isn't redundant at all
> 
> I'm interested in RAID 0 b/c I want drive speed, not drive
> reliability.  I've got a full-tower ATX case which runs amazingly cool
> and quiet, so I'm not concerned with my drives freaking out and
> spontaneously dying.
> 
> So the current installer does not support the motherboard's RAID,
> which is slower, but what about setting up linux's kernel's RAID?  Can
> the current Sarge installer do that?
> 
> Also, I know about RAID 0, 1, and 50, but what on earth are RAID 5 and
> 6?  I think RAID 5 has to do with networked JBODs, but I'm not sure...

RAID0 (not really raid) is striping data over multiple disks to increase
performance.

RAID1 is mirroring data across 2 disks to increase reliability (at the
cost of half your disk space).

RAID 3, 4 and 5 are striping with parity across multiple devices to
increase speed and reliability, although that requires XOR calculations
like there is no tomorrow and hence is often done with a hardware
XOR engine for acceleration.  Modern CPUs with good MMX/SSE/etc.
algorithms are not bad at it either, though.  RAID5 stripes the parity
data across the disks to avoid a heavy load on a single disk storing
parity data; RAID3 and 4 store the parity data on one device only.
It costs you one disk's worth of space out of your total (so you
get 700G if you have 8 * 100G drives in RAID 3/4/5).

RAID6 is RAID5 with a second set of parity data, which allows it to
tolerate two disk failures without data loss (although at that point
you have no redundancy left).  It costs you two disks' worth of space
out of the total (so you get 600G if you have 8 * 100G drives).

Supposedly, from what I have seen, RAID01 stripes the disks and then
mirrors two identical-size RAID0s, RAID10 mirrors pairs of disks and
then stripes across the mirrors, and RAID50 stripes across two RAID5s.
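
To illustrate the space arithmetic (placeholder devices, a sketch only):

  # raid5 over n disks of size s gives (n-1)*s usable: 8 x 100G -> 700G
  mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[a-h]1
  # raid6 gives (n-2)*s: 8 x 100G -> 600G, but survives two failed disks
  mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[a-h]1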

Len Sorensen





Re: RAID

2005-11-10 Thread Lennart Sorensen
On Thu, Nov 10, 2005 at 05:33:18PM +0100, Lionel Elie Mamane wrote:
> raid1? Isn't that the mirroring one? Yes, obviously, this doesn't need
> any special support at all. For me "raid support" means RAID 0/5/6.

Well, having /boot on raid1 is perfectly simple and reliable enough for
me.  If you want to make the rest of the system use something else,
that's reasonable, but unless you are willing to pay for hardware raid,
getting the boot loader to deal with any more than raid1 seems unlikely.
After booting (loading kernel/initrd), having / and whatever else you
have on lvm and/or raid5 or raid0 is reasonable too.  Those just require
too much knowledge (and reading from multiple devices) at boot time,
which makes boot loader support a big pain.

Len Sorensen





Re: RAID

2005-11-10 Thread Albert Dengg

lordSauron wrote:


...
I'm interested in RAID 0 b/c I want drive speed, not drive
reliability.  I've got a full-tower ATX case which runs amazingly cool
and quiet, so I'm not concerned with my drives freaking out and
spontaneously dying.

So the current installer does not support the motherboard's RAID,
which is slower, but what about setting up linux's kernel's RAID?  Can
the current Sarge installer do that?

Also, I know about RAID 0, 1, and 50, but what on earth are RAID 5 and
6?  I think RAID 5 has to do with networked JBODs, but I'm not sure...
 


raid4: two or more discs with the data spread around, plus one disc with
parity;
if one disc fails you still have the data...
raid5: the same with one exception: instead of having one disc dedicated
to parity, the parity is spread around the discs (first block
data-data-parity, second data-parity-data, and so on).


The point is: you do not duplicate the data, you just lose the capacity
of one disc
(cheaper, and less data to be written to the discs... so in the best
case as fast or nearly as fast as raid0 for reading).


raid6: a quite new mode which allows two discs to fail before losing data...

see http://en.wikipedia.org/wiki/Redundant_Array_of_Independent_Disks

yours
Albert





Re: RAID

2005-11-10 Thread Albert Dengg

Albert Dengg wrote:


Lionel Elie Mamane wrote:


Yes, but I just configure it to use one of the discs and dynamically
switch it to the other one if the disc goes bad.

 


What's the difference from other raid levels 0/4/5/6?
You have to write the data to the boot sector of the disc that
is physically booted.
(OK, with level 0 you can skip writing to all discs, since if one is
shot you have a non-functioning array anyway.)



Ah, sorry, just an error in thought...
But what's the point of having /boot on raid0?
And for 5/6 the amount of space wasted is marginal.

yours
Albert





Re: RAID

2005-11-10 Thread lordSauron
On 11/10/05, Albert Dengg <[EMAIL PROTECTED]> wrote:
> Well, it has to map the raid device to the real discs... like with all
> raid levels.
>
> Also...
> raid 0 isn't really raid...
> raid means _redundant_ array of independent/inexpensive discs...
> and raid 0 isn't redundant at all.

I'm interested in RAID 0 b/c I want drive speed, not drive
reliability.  I've got a full-tower ATX case which runs amazingly cool
and quiet, so I'm not concerned with my drives freaking out and
spontaneously dying.

So the current installer does not support the motherboard's RAID,
which is slower, but what about setting up linux's kernel's RAID?  Can
the current Sarge installer do that?

Also, I know about RAID 0, 1, and 50, but what on earth are RAID 5 and
6?  I think RAID 5 has to do with networked JBODs, but I'm not sure...

Thanks for your help - you're really giving me a lot of excellent
information to think about!

--
=== GCB v3.1 ===
GCS d-(+) s+:- a? C+() UL+++() P L++(+++)
E- W+(+++) N++ w--- M>++ PS-- PE Y+ PGP- t++(+++) 5?
X? R !tv>-- b++> DI+++> D-- G !e h(*) !r x---
=== EGCB v3.1 ===



Re: RAID

2005-11-10 Thread Albert Dengg

Lionel Elie Mamane wrote:


Yes, but I just configure it to use one of the discs and dynamically
switch it to the other one if the disc goes bad.

 


What's the difference from other raid levels 0/4/5/6?
You have to write the data to the boot sector of the disc that is
physically booted.
(OK, with level 0 you can skip writing to all discs, since if one is
shot you have a non-functioning array anyway.)


yours
Albert





Re: RAID

2005-11-10 Thread Albert Dengg

Lionel Elie Mamane wrote:




raid1? Isn't that the mirroring one? Yes, obviously, this doesn't need
any special support at all. For me "raid support" means RAID 0/5/6.
 


And to the list...
Sorry to the OP for the PM...
I'm currently not at home and not completely used to the mail setup...

Well, it has to map the raid device to the real discs... like with all
raid levels.

Also...
raid 0 isn't really raid...
raid means _redundant_ array of independent/inexpensive discs...
and raid 0 isn't redundant at all.

yours
Albert





Re: RAID

2005-11-10 Thread Jean-Luc Coulon (f5ibh)

On 10.11.2005 17:33:18, Lionel Elie Mamane wrote:

On Thu, Nov 10, 2005 at 09:36:09AM -0500, Lennart Sorensen wrote:
> On Thu, Nov 10, 2005 at 02:32:50PM +0100, Lionel Elie Mamane wrote:

>> I wasn't aware that grub supports linux software raid. It is not
>> mentioned in the documentation, at least. How do you make it work?

> I run /boot on a raid1 software raid.

raid1? Isn't that the mirroring one? Yes, obviously, this doesn't need
any special support at all. For me "raid support" means RAID 0/5/6.



But you have to write grub to the MBR of both boot devices if you want
to be able to reboot after a failure of one of the disks in the
raid1.





Jean-Luc




Re: RAID

2005-11-10 Thread Lionel Elie Mamane
On Thu, Nov 10, 2005 at 09:36:09AM -0500, Lennart Sorensen wrote:
> On Thu, Nov 10, 2005 at 02:32:50PM +0100, Lionel Elie Mamane wrote:

>> I wasn't aware that grub supports linux software raid. It is not
>> mentioned in the documentation, at least. How do you make it work?

> I run /boot on a raid1 software raid.

raid1? Isn't that the mirroring one? Yes, obviously, this doesn't need
any special support at all. For me "raid support" means RAID 0/5/6.

-- 
Lionel





Re: RAID

2005-11-10 Thread Lennart Sorensen
On Thu, Nov 10, 2005 at 02:32:50PM +0100, Lionel Elie Mamane wrote:
> I wasn't aware that grub supports linux software raid. It is not
> mentioned in the documentation, at least. How do you make it work?

I run /boot on a raid1 software raid.  I just installed grub to /dev/sda
and /dev/sdb and it works.  It knows how to read the filesystem from the
partition used for raid1 (which is very easy after all).
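
In other words (device names assumed; /boot lives on the raid1 of sda1
and sdb1):

  # write grub to the MBR of both members so the box still boots
  # if either disk dies
  grub-install /dev/sda
  grub-install /dev/sdb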

The support was mentioned in the changelog sometime in the last year.

Len Sorensen





Re: RAID

2005-11-10 Thread Lennart Sorensen
On Wed, Nov 09, 2005 at 10:42:26PM -0800, lordSauron wrote:
> okay... so, let's just say that hypothetically I choose to be a little
> less than intelligent and try to use the motherboard built-in software
> RAID.  Would that work?

Not with the sarge installer, for sure.  Maybe with the next installer
at some point; there seem to be discussions of supporting dmraid in the
future.

> Also, I'm a little skeptical of Linux's software RAID being better
> (ie. faster) than my motherboards.  Is there any statistics I can look
> at in regard to this?

Well, everyone who has tried both seems to agree the Linux software raid
is much faster.  I guess having open source and many people looking at it
and working on it has resulted in some rather efficient raid code, while
the makers of the fake raids on the motherboard just care that it works
with Windows.  Given the performance of software raid in Windows, it
doesn't take much to be faster than that, so it seems the fake raids
don't try very hard.

Having full software control over rebuilds is also a nice feature to
have.

Being able to see the state of the raid from software is nice.

Being able to recover the raid by moving it to an entirely different
system without worrying about the on disk format is very nice to have
(and is a problem with any hardware or fake raid setup).
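
For example (array and partition names assumed):

  # watch the state of all arrays and any rebuild progress
  cat /proc/mdstat
  mdadm --detail /dev/md0
  # hot-add a replacement disk and let the rebuild run under your control
  mdadm /dev/md0 --add /dev/sdb1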

> Once more, thanks for your considerable patience in dealing with me -
> someday I might evolve into a semi-respectable linux sub-guru...

Len Sorensen





Re: RAID

2005-11-10 Thread Lionel Elie Mamane
On Thu, Nov 10, 2005 at 12:54:09AM -0800, Andrew Sharp wrote:

> Since both lilo and grub support linux software raid, there really
> isn't anything to discuss,

I wasn't aware that grub supports linux software raid. It is not
mentioned in the documentation, at least. How do you make it work?

-- 
Lionel





Re: RAID

2005-11-10 Thread Andrew Sharp
On Thu, Nov 10, 2005 at 01:46:08AM -0600, Colin Baker wrote:
> lordSauron wrote:
> 
> >okay... so, let's just say that hypothetically I choose to be a little
> >less than intelligent and try to use the motherboard built-in software
> >RAID.  Would that work?
> >
> >Also, I'm a little skeptical of Linux's software RAID being better
> >(ie. faster) than my motherboards.  Is there any statistics I can look
> >at in regard to this?
> >
> >Once more, thanks for your considerable patience in dealing with me -
> >someday I might evolve into a semi-respectable linux sub-guru...
> > 
> >
> 
> Don't have any numbers to point you toward, but I would expect them to 
> be pretty much the same.  In either case, your CPU handles the 
> mirroring/striping.  One method (linux or your motherboard's controller) 
> may just be more efficient about it than the other.

Exactly.  The important thing to remember is that the "onboard" chipset
raid IS software raid, just not linux kernel software raid.  Other than
that, the only difference is that chipset raid provides boot support
for clueless loaders.  Say "Winblows."  But it's really just software
raid in the BIOS, with the CPU doing all the heavy lifting as usual.
Since both lilo and grub support linux software raid, there really isn't
anything to discuss, and linux software raid is supported with all the
tools and whatnot.  There's just no reason at all to use the chipset raid.

a





Re: RAID

2005-11-09 Thread Colin Baker

lordSauron wrote:


okay... so, let's just say that hypothetically I choose to be a little
less than intelligent and try to use the motherboard built-in software
RAID.  Would that work?

Also, I'm a little skeptical of Linux's software RAID being better
(ie. faster) than my motherboards.  Is there any statistics I can look
at in regard to this?

Once more, thanks for your considerable patience in dealing with me -
someday I might evolve into a semi-respectable linux sub-guru...
 



Don't have any numbers to point you toward, but I would expect them to 
be pretty much the same.  In either case, your CPU handles the 
mirroring/striping.  One method (linux or your motherboard's controller) 
may just be more efficient about it than the other.






Re: RAID

2005-11-09 Thread lordSauron
okay... so, let's just say that hypothetically I choose to be a little
less than intelligent and try to use the motherboard built-in software
RAID.  Would that work?

Also, I'm a little skeptical of Linux's software RAID being better
(ie. faster) than my motherboards.  Is there any statistics I can look
at in regard to this?

Once more, thanks for your considerable patience in dealing with me -
someday I might evolve into a semi-respectable linux sub-guru...



Re: RAID

2005-11-09 Thread Lennart Sorensen
On Wed, Nov 09, 2005 at 10:15:52AM -0800, lordSauron wrote:
> I'm not going to get into pushing for a new taxonomy for RAID technology.
> 
> Yeah, that's my problem ; )
> 
> So, I should be able to get a generic JBOD working via RAID technology
> done in Linux, but my nForce3 nvRAID stuff is crap so don't use it,
> right?
> 
> If you say don't use the nvRAID, then you will inevitably hear from me
> again asking *how* to do it.
> 
> Thanks for your help so far though!

Set up the controller to run plain disks.  Then configure those disks
into a software raid in Linux.  I'm not sure the installer supports
anything other than raid1; I don't remember, since that is the only
option I ever wanted, so I didn't pay much attention to it.

Running LVM across multiple disks is (almost) as unreliable as raid0,
without the performance gains of striping (unless you explicitly tell
it to stripe at create time, but later size changes will not pay
attention to the striping either way).  I run LVM only on top of raid1
devices so that at least I am not too likely to lose a chunk of the
LVM.
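
Something like this, with placeholder partitions (a sketch of the
arrangement described above):

  # raid1 across the two plain disks, then LVM on top of it
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 20G -n data vg0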

Len Sorensen





Re: RAID

2005-11-09 Thread Lennart Sorensen
On Wed, Nov 09, 2005 at 11:15:04AM -0600, Colin Baker wrote:
> Getting way off-topic here, but I really wish people would stop calling 
> raid 0 a raid, as there is nothing redundant about it.  Just aid 0, maybe?

0 levels of redundancy = raid0 ? :)

I never thought it made sense either.  Calling it striping sure makes
more sense.

Len Sorensen





Re: RAID

2005-11-09 Thread lordSauron
On 11/9/05, Alexander Charbonnet <[EMAIL PROTECTED]> wrote:
> > Getting way off-topic here, but I really wish people would stop calling
> > raid 0 a raid, as there is nothing redundant about it.  Just aid 0, maybe?

I'm not going to get into pushing for a new taxonomy for RAID technology.

> That's the "0" part.  There's zero RAID happening!

Yeah, that's my problem ; )

So, I should be able to get a generic JBOD working via RAID technology
done in Linux, but my nForce3 nvRAID stuff is crap so don't use it,
right?

If you say don't use the nvRAID, then you will inevitably hear from me
again asking *how* to do it.

Thanks for your help so far though!

--
=== GCB v3.1 ===
GCS d-(+) s+:- a? C+() UL+++() P L++(+++)
E- W+(+++) N++ w--- M>++ PS-- PE Y+ PGP- t++(+++) 5?
X? R !tv>-- b++> DI+++> D-- G !e h(*) !r x---
=== EGCB v3.1 ===



Re: RAID

2005-11-09 Thread Alexander Charbonnet
> Getting way off-topic here, but I really wish people would stop calling
> raid 0 a raid, as there is nothing redundant about it.  Just aid 0, maybe?

That's the "0" part.  There's zero RAID happening!





Re: RAID

2005-11-09 Thread Colin Baker

Lennart Sorensen wrote:



Well, a single 80G or 120G is rather nice.  Raid0 is just not worth it
unless you have enough drives to run raid0 + raid1 (usually 4 drives, so
you mirror each pair and then stripe the mirror sets).  Sometimes called
raid10 or raid01 (depending on the order you do it in: stripe mirrors or
mirror the stripes).
 



Getting way off-topic here, but I really wish people would stop calling 
raid 0 a raid, as there is nothing redundant about it.  Just aid 0, maybe?






Re: RAID

2005-11-09 Thread bounce-debian-amd64=archive=mail-archive.com
On Tue, Nov 08, 2005 at 08:58:36PM -0800, lordSauron wrote:
> I don't know... I think that the nVidia nForce 3's RAID is on the
> controller level, so I think it should work, but I'm not sure.

No, it is at the BIOS/software-driver level.  The BIOS is just software
in ROM.  No one yet puts an XOR/comparator engine into generic IDE/SATA
controllers, since it is much cheaper not to do it and to waste CPU
cycles doing raid for the minority of users that want it.

> Yeah, that hit me rather suddenly in the car on the way home... 
> However, I don't have the raw cash to buy anything over about 120Gb.

Well, a single 80G or 120G is rather nice.  Raid0 is just not worth it
unless you have enough drives to run raid0 + raid1 (usually 4 drives, so
you mirror each pair and then stripe the mirror sets).  Sometimes called
raid10 or raid01 (depending on the order you do it in: stripe mirrors or
mirror the stripes).

Prices I see for disks around where I live (Canada):
80G $67 ($0.83/GB)
160G $90 ($0.56/GB)
200G $105 ($0.52/GB)
250G $123 ($0.49/GB)
320G $167 ($0.52/GB)

So my last drive I got a couple of months ago was a 250G.

Apparently the 40 and 120G drives were discontinued (last I checked the
40G was about $4 less than the 80G, and the 120G was about $5 less than
the 160G.)

I tend to have machines with way more RAM and disk space than their
CPU/age would generally warrant.  So an Athlon 700 with 768M RAM and
2*80G+1*250G disk space, and a 486/66 with 48M RAM and 18G disk space.
I always seem to need more disk space and RAM than I need CPU power, so
that is what I tend to upgrade.

Len Sorensen





Re: RAID

2005-11-08 Thread Lennart Sorensen
On Tue, Nov 08, 2005 at 12:14:52PM -0800, lordSauron wrote:
> Hi, I was just doing some information gathering for a potential
> upgrade I want to perform.
> 
> I have an ECS Elitegroup nForce 3-A motherboard, which has built-in
> RAID 0/1/0+1 support.  I want to get an 80 GB SATA150 drive, but if I
> can get RAID to work with Linux, I'd love to pay more and get two 40
> GB SATA150 drives and RAID 0 them together.  *However,* when I tried
> this with an ATA 27 GB drive and an IDE 14.7 GB drive (an IBM Deskstar
> and a Seagate brand drive respectively) and then tried it with the
> amd64 installer (which supported kernel ver. 2.6.x-11, the x being
> holes in my memory) it didn't work (or at least I couldn't make it
> work, but I'm still not that great at installing so it's totally
> possible I just majorly screwed up and did something stupid along the
> line...).  I think it'd work with the newer kernels (>= -12) b/c with
> the -11 my m-board integrated audio and LAN didn't work, but now they
> do... so I think that it's just that the -11 kernel didn't yet have
> the device drivers for these components, and now that these things
> work, the RAID should work too, right?  I just wanted to see if anyone
> knew the status of this...
> 
> Thanks for any info you can give me!

Use software raid in Linux.  The installer allows you to set it up.

No desktop motherboard has hardware raid onboard.  Many have fake raid,
however, which is simply the BIOS pretending to be raid until the driver
(usually proprietary) takes over doing raid.  It is all software, and
usually not as fast or efficient as what Linux can do in software.

Now, why would you buy 40G drives when they cost (at least around here)
the same as 80G drives?  2 x 40G would cost twice the price of a single
80G.  Buying a pair of 160 or 200G drives would make much more sense.
Also, larger drives are denser and hence faster than smaller drives, so
you may actually get less performance striping two smaller drives, and
the reliability goes way down since you have two points of failure
instead of one.  An insane setup, really.

Len Sorensen





Re: RAID controllers

2005-07-27 Thread Erik Mouw
On Wed, Jul 27, 2005 at 10:13:10AM +0900, 김경표 wrote:
> A few weeks ago I did some benchmarks with Areca and 3ware 9500.
> I used my own benchmark program, which issues sequential read and write
> requests.
> 
> Results:
>
> 1-10 clients: Areca is roughly 15% better than 3ware.
> 10-20 clients: almost the same.
> 20+ clients: 3ware performance is stable, but Areca does not show a
> stable performance graph.
> I think the Areca raid driver is not mature.

That's right, the Areca driver needs work (Christoph Hellwig pointed
out some issues), but Andrew Morton already put it in the -mm tree so
it gets testing.

> Test environment:
> kernel 2.4.27 stock and 2.6.12 stock

I hope you didn't compare a 2.4.27 3ware kernel against a 2.6.12 Areca
kernel, because in that case you're also comparing the networking, NFS,
XFS, and block-level code.


Erik

-- 
+-- Erik Mouw -- www.harddisk-recovery.com -- +31 70 370 12 90 --
| Lab address: Delftechpark 26, 2628 XH, Delft, The Netherlands





Re: RAID controllers

2005-07-26 Thread 김경표

- Original Message - 
From: "Bharath Ramesh" <[EMAIL PROTECTED]>
To: "Erik Mouw" <[EMAIL PROTECTED]>
Cc: "Lennart Sorensen" <[EMAIL PROTECTED]>; "Jerome Warnier" <[EMAIL 
PROTECTED]>; "debian-amd64" 
Sent: Wednesday, July 27, 2005 1:31 AM
Subject: Re: RAID controllers


> * Erik Mouw ([EMAIL PROTECTED]) wrote:
> > On Tue, Jul 26, 2005 at 11:37:25AM -0400, Lennart Sorensen wrote:
> > > On Tue, Jul 26, 2005 at 05:23:05PM +0200, Jerome Warnier wrote:
> > > > Anybody here has an experience with Sil3114 or Areca RAID SATA
> > > > controllers?
> > > > 
> > > > I'm wondering which one is best on AMD64, or if I stick with a 3Ware
> > > > Escalade 8600 instead.
> > > > 
> > > > I'm planning to use Debian Sarge, of course.
> > > 
> > > I haven't used any of them, but from what I have read, the 3ware drivers
> > > are very mature and have been around for a while.  The Areca drivers I
> > > suspect you have to compile and install yourself (making installing
> > > somewhat more difficult on one). 
> > 
> > The -mm kernels come with Areca drivers. they are at at least included
> > in 2.6.13-rc3-mm1. Andrew Morton keeps a set of broken out patches, so
> > you can also patch other kernels:
> > 
> > http://www.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.13-rc3/2.6.13-rc3-mm1/broken-out/
> > 
> > I think you need areca-raid-linux-scsi-driver.patch and
> > areca-raid-linux-scsi-driver-fix.patch (in that order).
> > 
> > > The few benchmarks I have seen indicate the 3ware is easily the
> > > fastest of them.
> > 
> > The benchmarks in the respected german computer magazine c't suggests
> > the Areca cards are faster.
> 
> According to the benchmarks, Areca is definitely a better card. I don't
> remember when, but I did read a review of a bunch of RAID controllers
> done under Linux. Depending on your needs, you would have to decide
> whether you want a host-based or an intelligent RAID controller. I would
> personally go with the intelligent RAID controller. The advantage of
> using Areca is that they have native SATA support, unlike 3ware or
> Adaptec, which have their proprietary crap that is still light years
> behind; you don't get the bang for your money. But if you want a tried
> and tested RAID controller, I would go with 3ware/Adaptec. If you are
> going towards a host-based controller, I would suggest looking at the
> RAIDCore BC4852 controller. It's supposed to be really good, though I'm
> not sure about the Linux support.
> 
> Bharath
> 
> ---
> Bharath Ramesh   <[EMAIL PROTECTED]>   
> http://csgrad.cs.vt.edu/~bramesh
> 
> 
> -- 
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
> 
A few weeks ago, I did some benchmarks with the Areca and the 3ware 9500.
I used my own benchmark program, which issues sequential read and write requests.

Result :::

1 ~ 10 clients : roughly, Areca is better than 3ware by about 15%.
10 ~ 20 clients : almost the same.
20 ~ clients : 3ware performance is stable, but Areca does not draw a stable
performance graph.
   I think the Areca RAID driver is not mature yet.

Test Environment :::
kernel 2.4.27 stock 
  2.6.12 stock
filesystem xfs
network protocol : NFS
nfs client : 2.4.21
architecture : Intel Xeon 2.4GHz
memory : 2G
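
(For reference, a similar sequential load can be approximated with plain
dd over the NFS mount -- a rough sketch; the mount point /mnt/nfs and the
1 GB size are hypothetical, and one write/read pair would be run per
simulated client:)

  # sequential write, 1 GB
  dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024
  # sequential read back
  dd if=/mnt/nfs/testfile of=/dev/null bs=1M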

Re: RAID controllers

2005-07-26 Thread Bharath Ramesh
* Erik Mouw ([EMAIL PROTECTED]) wrote:
> On Tue, Jul 26, 2005 at 11:37:25AM -0400, Lennart Sorensen wrote:
> > On Tue, Jul 26, 2005 at 05:23:05PM +0200, Jerome Warnier wrote:
> > > Anybody here has an experience with Sil3114 or Areca RAID SATA
> > > controllers?
> > > 
> > > I'm wondering which one is best on AMD64, or if I stick with a 3Ware
> > > Escalade 8600 instead.
> > > 
> > > I'm planning to use Debian Sarge, of course.
> > 
> > I haven't used any of them, but from what I have read, the 3ware drivers
> > are very mature and have been around for a while.  The Areca drivers I
> > suspect you have to compile and install yourself (making installation
> > somewhat more difficult). 
> 
> The -mm kernels come with Areca drivers. They are at least included
> in 2.6.13-rc3-mm1. Andrew Morton keeps a set of broken out patches, so
> you can also patch other kernels:
> 
> http://www.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.13-rc3/2.6.13-rc3-mm1/broken-out/
> 
> I think you need areca-raid-linux-scsi-driver.patch and
> areca-raid-linux-scsi-driver-fix.patch (in that order).
> 
> > The few benchmarks I have seen indicate the 3ware is easily the
> > fastest of them.
> 
> The benchmarks in the respected German computer magazine c't suggest
> the Areca cards are faster.

Areca, according to the benchmarks, is definitely the better card. I don't
remember where I read it, but I did see a review of a bunch of RAID
controllers done under Linux. Depending on your needs, you would have to
decide whether you want a host based or an intelligent RAID controller. I
would personally go with the intelligent RAID controller. The advantage of
using Areca is that they have native SATA support, unlike 3ware or
Adaptec, which have their proprietary crap that is still light years
behind. You don't get the bang for your money. But if you want a tried and
tested RAID controller, I would go with 3ware/Adaptec. If you are
going towards a host based controller, I would suggest looking at the
RaidCore BC4852 controller. It's supposed to be really good; not sure
about the Linux support.

Bharath

---
Bharath Ramesh   <[EMAIL PROTECTED]>   http://csgrad.cs.vt.edu/~bramesh


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID controllers

2005-07-26 Thread Mickael Marchand
Lennart Sorensen wrote:
> On Tue, Jul 26, 2005 at 05:23:05PM +0200, Jerome Warnier wrote:
> 
>>Anybody here has an experience with Sil3114 or Areca RAID SATA
>>controllers?
>>
>>I'm wondering which one is best on AMD64, or if I stick with a 3Ware
>>Escalade 8600 instead.
>>
>>I'm planning to use Debian Sarge, of course.
> 
> 
> I haven't used any of them, but from what I have read, the 3ware drivers
> are very mature and have been around for a while.  The Areca drivers I
> suspect you have to compile and install yourself (making installation
> somewhat more difficult).  The few benchmarks I have seen
> indicate the 3ware is easily the fastest of them.
Well, my experience with the 3ware 8xxx tells the exact opposite:
their RAID5 sucks as hell. With 2 of these cards and many new Maxtor
drives, I was changing a hard drive every week!
I won't tell you how many times it totally broke after a hard drive
failure. It was never able to reconstruct a working array after a failure.
Even the 3ware developers were unable to help me, though we exchanged
long mails testing stuff.
I also stopped counting _hard_ freeze crashes after the 2nd week
(especially on amd64...).

Since then, I have switched to Linux software RAID5 and RAID6 over Sil3114
chipsets and 3ware cards recycled in JBOD (passthrough) mode; I have
changed two disks in 6 months and it went smoothly.

> 
> The Sil3114 is of course NOT a hardware raid controller.  It is
> proprietary software raid, and really not recommended in general.  dmraid
> is not well supported yet, which is what you would need to use that
> stuff.
Their own raid is probably not recommended, but
Linux software raid works like a charm over it (using RAID6 and 5 at work).

My (own and personal) conclusion is that 3ware's cards (and probably any
single hardware raid card) don't beat the Linux software implementation in
either stability or performance.

just my 2c.

Cheers,
Mik

> 
> Len Sorensen
> 
> 



-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID controllers

2005-07-26 Thread Lennart Sorensen
On Tue, Jul 26, 2005 at 06:12:19PM +0200, Erik Mouw wrote:
> The -mm kernels come with Areca drivers. They are at least included
> in 2.6.13-rc3-mm1. Andrew Morton keeps a set of broken out patches, so
> you can also patch other kernels:
> 
> http://www.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.13-rc3/2.6.13-rc3-mm1/broken-out/
> 
> I think you need areca-raid-linux-scsi-driver.patch and
> areca-raid-linux-scsi-driver-fix.patch (in that order).

Well, nice to see someone else getting into the Linux business.

> The benchmarks in the respected German computer magazine c't suggest
> the Areca cards are faster.

All the benchmarks I have seen were run on Windows, so who knows what to
make of those. :)

Len Sorensen


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID controllers

2005-07-26 Thread Erik Mouw
On Tue, Jul 26, 2005 at 11:37:25AM -0400, Lennart Sorensen wrote:
> On Tue, Jul 26, 2005 at 05:23:05PM +0200, Jerome Warnier wrote:
> > Anybody here has an experience with Sil3114 or Areca RAID SATA
> > controllers?
> > 
> > I'm wondering which one is best on AMD64, or if I stick with a 3Ware
> > Escalade 8600 instead.
> > 
> > I'm planning to use Debian Sarge, of course.
> 
> I haven't used any of them, but from what I have read, the 3ware drivers
> are very mature and have been around for a while.  The Areca drivers I
> suspect you have to compile and install yourself (making installation
> somewhat more difficult). 

The -mm kernels come with Areca drivers. They are at least included
in 2.6.13-rc3-mm1. Andrew Morton keeps a set of broken out patches, so
you can also patch other kernels:

http://www.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.13-rc3/2.6.13-rc3-mm1/broken-out/

I think you need areca-raid-linux-scsi-driver.patch and
areca-raid-linux-scsi-driver-fix.patch (in that order).
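
(Applying them is the usual kernel patch dance -- a sketch assuming a
vanilla 2.6.13-rc3 tree in /usr/src/linux and both patches downloaded one
directory up:)

  cd /usr/src/linux
  patch -p1 < ../areca-raid-linux-scsi-driver.patch
  patch -p1 < ../areca-raid-linux-scsi-driver-fix.patch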

> The few benchmarks I have seen indicate the 3ware is easily the
> fastest of them.

The benchmarks in the respected German computer magazine c't suggest
the Areca cards are faster.


Erik

-- 
+-- Erik Mouw -- www.harddisk-recovery.com -- +31 70 370 12 90 --
| Lab address: Delftechpark 26, 2628 XH, Delft, The Netherlands


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID controllers

2005-07-26 Thread Bharath Ramesh
* Jerome Warnier ([EMAIL PROTECTED]) wrote:
> Anybody here has an experience with Sil3114 or Areca RAID SATA
> controllers?
> 
> I'm wondering which one is best on AMD64, or if I stick with a 3Ware
> Escalade 8600 instead.

We have the 3ware Escalade 8600 in our storage server and their Linux
support is really good. The 3ware disk monitoring system they have also
supports amd64; it has a nice web interface from which you can do all
needed maintenance on the RAID. If you have the extra money I would buy
the 3ware 9500 series controller instead. They have some extra
features, like support for multiple cards.

Bharath

---
Bharath Ramesh   <[EMAIL PROTECTED]>   http://csgrad.cs.vt.edu/~bramesh


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID controllers

2005-07-26 Thread Lennart Sorensen
On Tue, Jul 26, 2005 at 05:59:07PM +0200, Mickael Marchand wrote:
> Well, my experience with the 3ware 8xxx tells the exact opposite:
> their RAID5 sucks as hell. With 2 of these cards and many new Maxtor
> drives, I was changing a hard drive every week!
> I won't tell you how many times it totally broke after a hard drive
> failure. It was never able to reconstruct a working array after a failure.
> Even the 3ware developers were unable to help me, though we exchanged
> long mails testing stuff.
> I also stopped counting _hard_ freeze crashes after the 2nd week
> (especially on amd64...).
> 
> Since then, I have switched to Linux software RAID5 and RAID6 over Sil3114
> chipsets and 3ware cards recycled in JBOD (passthrough) mode; I have
> changed two disks in 6 months and it went smoothly.

Certainly sounds weird.  Many other people swear by the 3ware cards.  I
have also talked to people who say they just use them as very nice high
port count SATA controllers and run md raid on them to get the fastest
performance.
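
(A sketch of that setup, assuming the controller exports its disks in JBOD
mode as /dev/sda through /dev/sdd -- the device and partition names are
hypothetical:)

  # software RAID5 across the four exported disks
  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  cat /proc/mdstat    # watch the initial resync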

> Their own raid is probably not recommended, but
> Linux software raid works like a charm over it (using RAID6 and 5 at work).

Well, anything you can treat as just an IDE/SATA controller is always
nice, as long as the drivers for it are in Linux.

> My (own and personal) conclusion is that 3ware's cards (and probably any
> single hardware raid card) don't beat the Linux software implementation in
> either stability or performance.

Given modern cpu speeds, I suspect that is probably true.

Personally I run md raid1.

My experience with IBM ServeRAID 4 cards is not impressive
performance-wise.

Len Sorensen


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID controllers

2005-07-26 Thread Lennart Sorensen
On Tue, Jul 26, 2005 at 05:23:05PM +0200, Jerome Warnier wrote:
> Anybody here has an experience with Sil3114 or Areca RAID SATA
> controllers?
> 
> I'm wondering which one is best on AMD64, or if I stick with a 3Ware
> Escalade 8600 instead.
> 
> I'm planning to use Debian Sarge, of course.

I haven't used any of them, but from what I have read, the 3ware drivers
are very mature and have been around for a while.  The Areca drivers I
suspect you have to compile and install yourself (making installation
somewhat more difficult).  The few benchmarks I have seen
indicate the 3ware is easily the fastest of them.

The Sil3114 is of course NOT a hardware raid controller.  It is
proprietary software raid, and really not recommended in general.  dmraid
is not well supported yet, which is what you would need to use that
stuff.

Len Sorensen


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID (was: "rock solid" motherboard)

2005-03-11 Thread Paul Brook
> > This is just plain wrong. I suggest you go and read some good
> > documentation on the properties of different RAID formats. Generally
> > speaking RAID0 doubles throughput for large writes as data is striped
> > across both volumes, and has seek times the same as a single drive. RAID1
> > (mirroring) usually gives similar throughput to a single drive for large
> > transfers, and a good driver can reduce average seek times by reading
> > data from whichever drive head is nearest.
>
> When would the heads ever not be in the same place on a raid1 setup?  I
> would certainly be happiest if the drive always read both disks and
> compared the reads to detect problems (but then again if it did, which
> one of two different values would be correct?).  I would have thought
> the heads were mostly in sync at all times.

Consider two processes reading from different files. The kernel can 
read both files in parallel, using one drive for each. The heads will only 
tend to be aligned after a write, and even then the kernel could write the 
data out to the drives in a different order.
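
(This is easy to see on an md raid1 -- a sketch assuming two large files on
the array and the sysstat tools installed; both mirror halves should show
up busy:)

  dd if=/mnt/raid/file1 of=/dev/null bs=1M &
  dd if=/mnt/raid/file2 of=/dev/null bs=1M &
  iostat -x 1    # per-disk utilisation, one line per device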

RAID1 does not provide any consistency checking, or protection against data 
corruption. The only thing it protects against is total and catastrophic 
drive failure. The same is true of raid4 and raid5.

One of the advantages of software raid is that the kernel can optimise its IO 
workload to minimise head movement. This is also possible with hardware 
controllers if they support command queueing (TCQ/NCQ). I've no idea if 
dmraid does this or not.
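
(Whether the kernel is actually queueing commands on a disk can be checked
through sysfs -- a sketch; the sda name is hypothetical:)

  cat /sys/block/sda/device/queue_depth    # 1 means no effective queueing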

> > Not true. Read http://linux.yyz.us/sata/faq-sata-raid.html.
> > Most (maybe all?) of the proprietary raid formats are supported by the
> > dmraid driver.
>
> But is proprietary software raid ever faster/more efficient than linux
> md raid? 

No. The only advantage is that if you dual boot you can use the same raid set 
in Windows. Normal md raid is more flexible and better tested.

> And what if your board dies and you have to move the drives to 
> another machine, how do you get the raid back?

I think the dmraid drivers will work with any controller, though I've never 
tried it.
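
(For what it's worth, inspecting and activating such a set with the dmraid
tool looks like this -- assuming dmraid is installed:)

  dmraid -r    # list raid sets found in the on-disk metadata
  dmraid -ay   # activate all discovered sets via device-mapper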

Paul


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Re: RAID bus controller: Promise Technology, Inc. PDC20376(FastTrak376)(rev02)

2005-03-02 Thread sarge netinst debian-pure64
Goswin wrote:
I expect the raid chips will actually improve in the future and wonder
when they will arrive. Some of the older SCSI RAID controllers were
probably fairly good and the newer SATA RAID controllers should have
been introduced with at least that capability.
Why should they improve? There are better chips out there but they
cost more. The crappiness of the softraid chips is a choice dictated by
money and that won't change.

Goswin-
I have decided the on-board RAID chips really are garbage, even the VIA.
This review shows the performance improvement is only slight, if any
(though that test was under XP):
http://techreport.com/reviews/2004q2/chipset-raid/index.x?pg=28
This Linux XFS test probably proves the software raid is the winner:
http://lists.debian.org/debian-amd64/2005/02/msg00780.html

These results now conclude my raid chip misadventure.
Thanks to all-
[EMAIL PROTECTED]
--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


Re: Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376)(rev02)

2005-03-02 Thread sarge netinst debian-pure64
David Liontooth wrote:
Both Fasttrack and Via are now supported -- cf. the Linux software raid dmraid 
driver
at http://people.redhat.com/~heinzm/sw/dmraid, which lists these:
Highpoint HPT37X
Highpoint HPT45X
Intel Software RAID
LSI Logic MegaRAID
NVidia NForce
Promise FastTrack
Silicon Image Medley
VIA Software RAID

+++
Dave-
Thanks for the new information. This seems to be what I was imagining. I 
am just beginning to examine the dmraid.

Thanks-
[EMAIL PROTECTED]

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376)(rev02)

2005-03-02 Thread Goswin von Brederlow
sarge netinst debian-pure64 <[EMAIL PROTECTED]> writes:

> So I wonder whether my raid chips are smart_raid or dumb_raid and
> whether they can save the CPU cycles or not while remaining error free.

That's really easy: if the chip works in non-RAID mode but not in RAID
mode, then it is dumb. Real hardware raid is transparent.
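
(A quick check along those lines -- if Linux still sees every physical
disk on its own, the BIOS "array" is invisible to the OS and the chip is
dumb:)

  cat /proc/partitions    # each physical disk listed separately => softraid
  lspci | grep -i raid    # identifies the controller itself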

> I expect the raid chips will actually improve in the future and wonder
> when they will arrive. Some of the older SCSI RAID controllers were
> probably fairly good and the newer SATA RAID controllers should have
> been introduced with at least that capability.

Why should they improve? There are better chips out there but they
cost more. The crappiness of the softraid chips is a choice dictated by
money and that won't change.

> I am afraid I do have cheap junk instead and so md raid is still the
> best choice.

It makes less and less difference. Nowadays software raid does ~5GB/s of
throughput at full CPU speed. Since hard disks and, more importantly, the
PCI bus have much less bandwidth, correspondingly less CPU is actually used.

As a side note: my software raid on the VIA chips is way faster than a
3ware hardware raid, at the cost of a little CPU, because it connects
to the HT bus instead of PCI like the 3ware does.
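
(The throughput figure is the in-memory parity speed the md driver
measures at boot; it is easy to look up -- a sketch, exact message wording
varies by kernel:)

  dmesg | grep -iE 'raid5|xor'    # e.g. "raid5: using function: ... MB/sec"
  cat /proc/mdstat                # arrays, their state and resync speed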

MfG
Goswin


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376)(rev02)

2005-03-02 Thread David Liontooth
[EMAIL PROTECTED] wrote:
I have the onboard Promise and VIA chips with raid features. I also use
those now as individual disks with the raid turned off. Unless I missed
something, I think there is no raid support for these chips under Linux.
Lots of fun stuff happening with Linux raid -- cf. 
http://linux.yyz.us/sata/faq-sata-raid.html
Both Fasttrack and Via are now supported -- cf. the Linux software raid 
dmraid driver
at http://people.redhat.com/~heinzm/sw/dmraid, which lists these:

Highpoint HPT37X
Highpoint HPT45X
Intel Software RAID
LSI Logic MegaRAID
NVidia NForce
Promise FastTrack
Silicon Image Medley
VIA Software RAID
Jeff Garzik has just added PATA support to the libata driver -- looks like
IDE will become obsolete altogether. Some of the Promise cards also have
libata support, not yet in the kernel, but likely to be merged by 2.6.12
and already included in some vendor kernels (gentoo and ubuntu).

Dave
--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376)(rev02)

2005-03-01 Thread editor
David Liontooth wrote:
sarge netinst debian-pure64 wrote:
Does anybody use any of the RAID features on any motherboard for Linux?

Not on my amd64 I don't (I use a 3ware card), but on a dual xeon I use 
the sata_sil driver for an onboard Silicon Image 3112 controller and run 
software raid (md) off it -- software raid in Linux seems to be 
excellent. The system hasn't had much heavy use yet so I can't really 
vouch for robustness, but my experience so far is very positive (I had 
better luck with raidtools2 than with mdadm). Software raid looks more 
flexible -- it looks like you should be able to create a RAID5 array, 
say, from a bunch of disks from different controllers.

Dave
+++
Dave-
I have the onboard Promise and VIA chips with raid features. I also use
those now as individual disks with the raid turned off. Unless I missed
something, I think there is no raid support for these chips under Linux.
I haven't used md for several years. Even then, md had many capabilities
beyond those of the newest on-board raid controller chips, as I have
discovered using plain DOS with the raid turned on. No special support
was required for these chips using raid under DOS.
I never did experience any data corruption or errors of any kind using
md until one of the two disks failed, and then I lost all of the data on
the disk and in my RAID0 partition. That was probably due to inadequate
cooling rather than to md. I am sure md is even more robust now.
Thanks-
[EMAIL PROTECTED]
--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


Re: Re: Re: Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376)(rev02)

2005-03-01 Thread sarge netinst debian-pure64
Len Sorensen wrote:
With the current trend of "win*" devices, with more and more firmware
loaded by the driver and processing off-loaded to the CPU, all in the name
of cutting manufacturing costs (and wasting the end user's expensive CPU),
I doubt we are going to get real raid chips on the mainboard ever.
People even seem willing to buy sound cards that don't off-load any of
the signal processing while spending lots of money on the video card to
get higher frame rates in their games (while the sound card just ate 10%
of the CPU and framerate because it is cheap junk).
As long as the average computer buyer is clueless about computers (and
why shouldn't they be, it's a rather complicated piece of electronics),
there will continue to be companies making cheap junk that has
impressive-looking specs for less money than something better.
Does anybody use any of the RAID features on any motherboard for Linux?
I prefer to use something I can trust, like md raid in linux.  At least
then I get the source code to the drivers for my raid.  If I have to
waste cpu cycles running a software raid, at least I want to know how it
works, and I also suspect the linux software raid is more cpu efficient
than whatever the proprietary software raid makers have put in their
bios.
+
Len-
I expect your suspicions about CPU efficiency for quality raid programs 
like md are probably accurate. I would like to verify that with actual 
tests, but I can't run a comparison between raid software and raid chips 
under Linux because my raid chips appear to be unsupported.

I agree with your preference for saving the CPU cycles with better 
peripherals. The cheap integrated audio chip on my motherboard used 45% 
of the CPU running mpg321, while my emu10k1 uses less than 1% with better 
sound. Some motherboards now have the Creative chip on-board. My Radeon 
produces 2100 fps using the GPU chip for OpenGL at only 15% of the 
CPU, while software OpenGL produces only 200-300 fps using almost 
all of the CPU cycles. Even my PostScript printers have brains and use 
none of the CPU at all to make a raster. I admit intelligent chips can 
produce better results without wasting the CPU cycles I want for other 
things.

So I wonder whether my raid chips are smart_raid or dumb_raid and 
whether they can save the CPU cycles or not while remaining error free.

I expect the raid chips will actually improve in the future and wonder 
when they will arrive. Some of the older SCSI RAID controllers were 
probably fairly good and the newer SATA RAID controllers should have 
been introduced with at least that capability.

I am afraid I do have cheap junk instead and so md raid is still the 
best choice.

Thanks-
[EMAIL PROTECTED]
--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


Re: Re: Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376)(rev02)

2005-03-01 Thread Lennart Sorensen
On Mon, Feb 28, 2005 at 09:17:46PM -0700, sarge netinst debian-pure64 wrote:
> There is no such thing as a TOY_raid module. Use a bios_disk module.
> 
> The GNU Parted manual says:
> 
> 3.1 5. The operating system may or may not use the BIOS to do normal
> file system access (Windows usually does, Linux or BSD do not).
> 
> So Windows goes to the BIOS to find the disk. The disk c: in DOS for
> example could be an array of disks like /dev/hde, /dev/hdf, /dev/sda and
> /dev/sdb and the DOS user would never know. The sata_promise doesn't
> know those disks are in an array so the module should ask the BIOS or
> there should be a way to tell the sata_promise those disks are used in
> an array not individually. (If the sata_promise wanted to do RAID.)
> 
> The bios_disk module asks the BIOS how the disks are configured and
> doesn't care if that is an array or just individual disks because the
> BIOS does all the work. That should be as good as a TOY_raid module.
> 
> Maybe someday the on-board RAID controllers will be good enough to use.
> Then RAID support might be needed.

With the current trend of "win*" devices, with more and more firmware
loaded by the driver and processing off-loaded to the CPU, all in the name
of cutting manufacturing costs (and wasting the end user's expensive CPU),
I doubt we are going to get real raid chips on the mainboard ever.
People even seem willing to buy sound cards that don't off-load any of
the signal processing while spending lots of money on the video card to
get higher frame rates in their games (while the sound card just ate 10%
of the CPU and framerate because it is cheap junk).

As long as the average computer buyer is clueless about computers (and
why shouldn't they be, it's a rather complicated piece of electronics),
there will continue to be companies making cheap junk that has
impressive-looking specs for less money than something better.

> Does anybody use any of the RAID features on any motherboard for Linux?

I prefer to use something I can trust, like md raid in linux.  At least
then I get the source code to the drivers for my raid.  If I have to
waste cpu cycles running a software raid, at least I want to know how it
works, and I also suspect the linux software raid is more cpu efficient
than whatever the proprietary software raid makers have put in their
bios.

Len Sorensen


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376)(rev 02)

2005-03-01 Thread Michal Schmidt
Superuserman wrote:
The BIOS can set the disks up as RAID0, RAID1 or
RAID0+1 or as individual disks. How can I tell the
sata_promise I am using RAID mode not individual disks?
You can use normal Linux software RAID (CONFIG_BLK_DEV_MD in kernel 
configuration). It works on any controller. You have to set it up using 
standard Linux tools.
See http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html
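
(A minimal mdadm example, assuming two identically partitioned disks with
/dev/sda1 and /dev/sdb1 set aside for the array -- the names are
hypothetical:)

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mkfs.ext3 /dev/md0
  mdadm --detail /dev/md0    # verify both mirror halves are active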

Michal
--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


Re: Re: Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376)(rev02)

2005-02-28 Thread sarge netinst debian-pure64
 [EMAIL PROTECTED] wrote:
 > I think the Promise RAID mode is a TOY RAID controller.
 >
 > What about Intel and Sil? This seems to be a problem with
 > i386 also for all motherboards with a TOY RAID in the BIOS.
 >
 > Thanks for the links. The VIA RAID mode and Promise RAID
 > mode have a BIOS ROM chip not a PGA chip so they aren't
 > hardwired. The BIOS can set the disks up as RAID0, RAID1 or
 > RAID0+1 or as individual disks. How can I tell the
 > sata_promise I am using RAID mode not individual disks?
 >
 +
There is no such thing as a TOY_raid module. Use a bios_disk module.
The GNU Parted manual says:
3.1 5. The operating system may or may not use the BIOS to do normal
file system access (Windows usually does, Linux or BSD do not).
So Windows goes to the BIOS to find the disk. The disk c: in DOS for
example could be an array of disks like /dev/hde, /dev/hdf, /dev/sda and
/dev/sdb and the DOS user would never know. The sata_promise doesn't
know those disks are in an array so the module should ask the BIOS or
there should be a way to tell the sata_promise those disks are used in
an array not individually. (If the sata_promise wanted to do RAID.)
The bios_disk module asks the BIOS how the disks are configured and
doesn't care if that is an array or just individual disks because the
BIOS does all the work. That should be as good as a TOY_raid module.
Maybe someday the on-board RAID controllers will be good enough to use.
Then RAID support might be needed.
Does anybody use any of the RAID features on any motherboard for Linux?
[EMAIL PROTECTED]

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


Re: Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376)(rev 02)

2005-02-28 Thread Superuserman
[EMAIL PROTECTED] wrote:
The native Promise RAID mode is now a big turn off. 

Because there is no such thing. Your Promise is not a HW RAID controller.
Please see
http://linux.yyz.us/sata/faq-sata-raid.html and
http://linux.yyz.us/sata/software-status.html#pata
Michal
+
Michal-
No, Promise RAID mode is not a HW RAID controller.
I think the Promise RAID mode is a TOY RAID controller.
I already tried the HW_raid module and that didn't work. I
tried the sata_promise module and that didn't work either. I
can't find the TOY_raid module for Promise. I have a VIA
RAID also on the motherboard and think that also uses the
TOY_raid module.
Does anybody have the TOY_raid module for Promise or VIA?
What about Intel and Sil? This seems to be a problem with
i386 also for all motherboards with a TOY RAID in the BIOS.
Thanks for the links. The VIA RAID mode and Promise RAID
mode have a BIOS ROM chip not a PGA chip so they aren't
hardwired. The BIOS can set the disks up as RAID0, RAID1 or
RAID0+1 or as individual disks. How can I tell the
sata_promise I am using RAID mode not individual disks?
I don't think the sata_promise was designed to use the
native Promise RAID mode. Is there someplace that says not
to use the sata_promise for native Promise RAID mode?
I don't disagree with the sata_promise not supporting the
native Promise RAID mode because TOY RAID is bad RAID and
not good enough for Linux. Did I miss something someplace?
Maybe I should say:
The native Promise RAID mode is a big don't turn on.

[EMAIL PROTECTED]
--
AMD Athlon64 @2.4Ghz  Corsair TwinX1024-3200XL @480Mhz DDR 
ASUS A8V Deluxe ASUS Radeon A9250Ge/Td/256
Creative SB Live  WD Raptor SATA  Maxtor PATA  IBM/Okidata 
Color PostScript3  Lexmark Mono PostScript2
Debian GNU/Linux debian-amd64/pure64 sid  Kernel 2.6.10 
XFree86 4.3.0  Xorg X11R6.8.2  1.5 Mb DSL

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376) (rev 02)

2005-02-25 Thread Michal Schmidt
Superuserman wrote:
The native Promise RAID mode is now a big turn off.
Because there is no such thing. Your Promise is not a HW RAID controller.
Please see
http://linux.yyz.us/sata/faq-sata-raid.html and
http://linux.yyz.us/sata/software-status.html#pata
Michal
--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376) (rev 02)

2005-02-25 Thread Superuserman
Goswin wrote:
Hi,
I got 2 new SATA drives, plugged them into the onboard Promise
PDC20376 (FastTrak 376), and ran badblocks on them (both in
parallel). The write test completed fine but read & compare locked up my
system.
I repeated with a read-only test and again it locked up with a DMA
timeout.
Does anyone have the onboard Promise successfully in use on an Asus
K8V board?
MfG
Goswin

Goswin-
My experience is now negative for the Promise controller
under Linux in the Promise mirroring mode.
Writing to the SATA sda2 with an ext2 filesystem was not
mirrored to the hidden PATA at all. The Promise controller
did not notice the discrepancy and did not want to rebuild
the array, nor permit rebuilding when prompted.
I think this is a failure of the Promise controller to cope
with the ext2 filesystem, or perhaps with more than one
partition, regardless of the way the sata_promise driver
writes to the array.
There was no way found to mirror less than a full disk.
The disks are handled differently by DOS and Linux under
the current sata_promise driver. The DOS makes two disks
available when mirroring is turned off and one disk when
mirroring is turned on. Linux now presents one disk whether
mirroring is on or off. The new 2.6.11 is supposed to
contain changes in the way the disks can be accessed.
I don't think the Promise controller can natively cope with
ext2 filesystems, and probably many others. The Promise
controller says the array is functional when the mirror is
bad. My tests showed the RAID was not mirroring at all
while claiming the array was functioning properly.
If my testing was accurate, the Promise controller fails
in mirroring and the native RAID features are unusable for
Linux. I have no information about the striping features.
I have no plans to use the native Promise RAID modes in the
future. They have limited capability to offer and failed in
serious ways during my several installations.
The native Promise RAID mode is now a big turn off.

[EMAIL PROTECTED]
--
AMD Athlon64 @2.4Ghz  Corsair TwinX1024-3200XL @480Mhz DDR 
ASUS A8V Deluxe ASUS Radeon A9250Ge/Td/256
Creative SB Live  WD Raptor SATA  Maxtor PATA  IBM/Okidata 
Color PostScript3  Lexmark Mono PostScript2
Debian GNU/Linux debian-amd64/pure64 sid  Kernel 2.6.10 
XFree86 4.3.0  Xorg X11R6.8.2  1.5 Mb DSL

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


Re: Re: RAID bus controller: Promise Technology, Inc. PDC20376(FastTrak376) (rev 02)

2005-02-24 Thread Superuserman
Goswin wrote:
Does anyone have the onboard Promise successfully in use on an Asus
K8V board?
MfG
Goswin

+
Goswin-
Promise is GARBAGE for mirroring.
I activated the Promise RAID TX2plus in my ASUS A8V Deluxe with one
SATA and one PATA in RAID mirror mode. The status was Critical for
every reboot because the mirror didn't match and had to be
rebuilt. This was in DOS, not Linux.
What has that got to do with it working under Linux amd64 at all?
Linux doesn't even use the Promise raid code but has to emulate it
with its own code. And everybody with half a brain uses Linux's own
software raid instead of ATA raid emulation.
MfG
Goswin
+++
Goswin-
Well, I just got the Promise working under DOS, so I have to 
take back what I said. I chose the Create and Initialize 
option first without luck, and then the plain Create worked.
  ???

I have one DOS partition and one Linux partition in the 
mirror. My next step is to see if Promise works as a mirror 
under Linux as sda2.

The PATA disk is not visible in Linux, only used by the Promise, at least 
so far. The 2.6.11 kernel is supposed to have access to the 
PATA, allowing software raid.

My theory is: if the hardware can't handle DOS, then what can 
it do? Like winmodems, for example.

I used to use md on my Thinkpad until I fried one drive. :-(
Maybe I will know about Linux by tomorrow. Or tonight.

[EMAIL PROTECTED]
--
AMD Athlon64 @2.4Ghz  Corsair TwinX1024-3200XL @480Mhz DDR 
ASUS A8V Deluxe ASUS Radeon A9250Ge/Td/256
Creative SB Live  WD Raptor SATA  Maxtor PATA  IBM/Okidata 
Color PostScript3  Lexmark Mono PostScript2
Debian GNU/Linux debian-amd64/pure64 sid  Kernel 2.6.10 
XFree86 4.3.0  Xorg X11R6.8.2  1.5 Mb DSL

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376) (rev 02)

2005-02-24 Thread Goswin von Brederlow
Superuserman <[EMAIL PROTECTED]> writes:

> Goswin wrote:
>
>> Hi,
>> I got 2 new SATA drives, plugged them into the onboard Promise
>> PDC20376 (FastTrak 376), and ran badblocks on them (both in
>> parallel). The write test completed fine but read & compare locked up my
>> system.
>> I repeated with a read-only test and again it locked up with a DMA
>> timeout.
>> Does anyone have the onboard Promise successfully in use on an Asus
>> K8V board?
>> MfG
>> Goswin
>
>
> +
>
> Goswin-
>
>
> Promise is GARBAGE for mirroring.
>
> I activated the Promise RAID TX2plus in my ASUS A8V Deluxe with one
> SATA and one PATA in RAID mirror mode. The status was Critical for
> every reboot because the mirror didn't match and had to be
> rebuilt. This was in DOS, not Linux.

What has that got to do with it working under Linux amd64 at all?

Linux doesn't even use the Promise raid code but has to emulate it
with its own code. And everybody with half a brain uses Linux's own
software raid instead of ATA raid emulation.

MfG
Goswin


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376) (rev 02)

2005-02-24 Thread Superuserman
Goswin wrote:
Hi,
I got 2 new SATA drives, plugged them into the onboard Promise
PDC20376 (FastTrak 376), and ran badblocks on them (both in
parallel). The write test completed fine but read & compare locked up my
system.
I repeated with a read-only test and again it locked up with a DMA
timeout.
Does anyone have the onboard Promise successfully in use on an Asus
K8V board?
MfG
Goswin

+
Goswin-
Promise is GARBAGE for mirroring.
I activated the Promise RAID TX2plus in my ASUS A8V Deluxe 
with one SATA and one PATA in RAID mirror mode. The status 
was Critical for every reboot because the mirror didn't 
match and had to be rebuilt. This was in DOS, not Linux.

The mirror mode is just a batch DISKCOPY upon command.
I initialized the mirror and FDISK showed only drive f:, so 
the mirroring was active. I copied a file to drive f:, then 
rebooted without rebuilding the mirror and turned the 
RAID off. There was nothing on drive g:! I turned the 
RAID back on, rebuilt the mirror, then went back to 
IDE, and the file was then on drive g:!

Pure garbage.
The Promise mirroring is Garbage. Maybe Promise striping is 
better.

Does the AMD64 have enough power to use md with Promise 
turned off?


[EMAIL PROTECTED]
--
AMD Athlon64 @2.4Ghz  Corsair TwinX1024-3200XL @480Mhz DDR 
ASUS A8V Deluxe ASUS Radeon A9250Ge/Td/256
Creative SB Live  WD Raptor SATA  Maxtor PATA  IBM/Okidata 
Color PostScript3  Lexmark Mono PostScript2
Debian GNU/Linux debian-amd64/pure64 sid  Kernel 2.6.10 
XFree86 4.3.0  Xorg X11R6.8.2  1.5 Mb DSL

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


Re: RAID bus controller: Promise Technology, Inc. PDC20376(FastTrak376) (rev 02)

2005-02-23 Thread Matthias Julius
Goswin von Brederlow <[EMAIL PROTECTED]> writes:

> It is supported by the source as such, it has the PCI id in its
> list. The problem is it doesn't seem to work right. I have a TX4 too
> which works perfectly.
>
> It seems noone else is using the onboard 20376 so it might not be
> surprising that bugs haven't been found.

I have an ASUS K8V with a Promise PDC20376.  It is running under
kernel 2.6.9-9-amd64-k8 without problems so far.  I probably have
never put a very high load on it.
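
(Goswin's lockup came from running badblocks on both drives at once; that
load is easy to reproduce -- a sketch assuming the drives are /dev/sda and
/dev/sdb and hold no data, since -w is destructive:)

  badblocks -sv -w /dev/sda &
  badblocks -sv -w /dev/sdb &
  wait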

Matthias


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID bus controller: Promise Technology, Inc. PDC20376(FastTrak376) (rev 02)

2005-02-23 Thread Goswin von Brederlow
the owner <[EMAIL PROTECTED]> writes:

> Goswin wrote:
>
>> I got 2 new SATA drives, plugged them into the onboard Promise
>> PDC20376 (FastTrak 376), and ran badblocks on them (both in
>> parallel). The write test completed fine but read & compare locked up my
>> system.
>> I repeated with a read-only test and again it locked up with a DMA
>> timeout.
>> Does anyone have the onboard Promise successfully in use on an Asus
>> K8V board?
>
> +++
>
> Goswin-
>
> Mine is the PDC20378 aka TX2 with 2SATA+PATA
> controlled by the sata_promise kernel module.
>
> The PDC20376 is probably very similar, maybe 133 vs 150 or
> something trivial.

It is supported by the source as such: it has the PCI ID in its
list. The problem is it doesn't seem to work right. I have a TX4 too,
which works perfectly.

It seems no one else is using the onboard 20376, so it might not be
surprising that bugs haven't been found.

MfG
Goswin


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Re: RAID bus controller: Promise Technology, Inc. PDC20376(FastTrak376) (rev 02)

2005-02-23 Thread the owner
Goswin wrote:
I got 2 new SATA drives, plugged them into the onboard Promise
PDC20376 (FastTrak 376), and ran badblocks on them (both in
parallel). The write test completed fine but read & compare locked up my
system.
I repeated with a read-only test and again it locked up with a DMA
timeout.

Does anyone have the onboard Promise successfully in use on an Asus
K8V board?
+++
Goswin-
Mine is the PDC20378, aka TX2, with 2 SATA + PATA,
controlled by the sata_promise kernel module.
The PDC20376 is probably very similar, maybe 133 vs 150 or
something trivial.
That should be listed as TX4/TX2 in the kernel config under
low level SCSI drivers. The TX4 is a 4-SATA chip and uses the
same driver as the TX2.
The TX2 and TX4 were add-in cards using the PDC2037x chips.
I built a kernel with the Promise driver enabled and there were no
problems.
Maybe the K8 Motherboard List should list sata_promise, not
PDC2037x, to be consistent.
The Promise SATA driver is production code and ought to be working.
PATA maybe not, depending on the kernel; never did try.
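
(On a 2.6.10-era kernel the relevant options look roughly like this -- the
names are from that generation of kernels and may differ in later ones:)

  CONFIG_SCSI_SATA=y
  CONFIG_SCSI_SATA_PROMISE=y    # sata_promise: TX2/TX4 cards, onboard PDC2037x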

[EMAIL PROTECTED]
--
AMD Athlon64 @2.4Ghz  Corsair TwinX1024-3200XL @480Mhz DDR 
ASUS A8V Deluxe Pro ASUS Radeon A9250Ge/Td/256
Creative SB Live  WD Raptor SATA  Maxtor PATA  IBM/Okidata 
Color PostScript3  Lexmark Mono PostScript2
Debian GNU/Linux debian-amd64/pure64 sid  Kernel 2.6.10 
XFree86 4.3.0  Xorg X11R6.8.2  1.5 Mb DSL

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376) (rev 02)

2005-02-23 Thread Goswin von Brederlow
the owner <[EMAIL PROTECTED]> writes:

>
> Greetings Goswin-
>
>
> I have the Promise disabled in the BIOS. I was going to use
> my two 250G PATA in a raid only changed my mind.
>
> Have you checked out the source? http://linux.yyz.us/sata/

Doesn't list that model.

MfG
Goswin


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID bus controller: Promise Technology, Inc. PDC20376 (FastTrak376) (rev 02)

2005-02-22 Thread the owner
Goswin wrote:
Hi,
I got 2 new SATA drives, plugged them into the onboard Promise
PDC20376 (FastTrak 376), and ran badblocks on them (both in
parallel). The write test completed fine but read & compare locked up my
system.
I repeated with a read-only test and again it locked up with a DMA
timeout.
Does anyone have the onboard Promise successfully in use on an Asus
K8V board?
MfG
Goswin
+++
Greetings Goswin-
I have the Promise disabled in the BIOS. I was going to use
my two 250G PATA drives in a raid, but changed my mind.
Have you checked out the source? http://linux.yyz.us/sata/

[EMAIL PROTECTED]
--
AMD Athlon64 @2.4Ghz  Corsair TwinX1024-3200XL @480Mhz DDR 
ASUS A8V Deluxe Pro ASUS Radeon A9250Ge/Td/256
Creative SB Live  WD Raptor SATA  Maxtor PATA  IBM/Okidata 
Color PostScript3  Lexmark Mono PostScript2
Debian GNU/Linux debian-amd64/pure64 sid  Kernel 2.6.10 
XFree86 4.3.0  Xorg X11R6.8.2  1.5 Mb DSL

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]