nap...@squareownz.org wrote:
> On Wed, May 09, 2012 at 06:58:47PM -0500, Dale wrote:
>> Mark Knecht wrote:
>>> On Wed, May 9, 2012 at 3:24 PM, Dale <rdalek1...@gmail.com> wrote:
>>>> Alan McKinnon wrote:
>>> <SNIP>
>>>>> My thoughts these days are that nobody really makes a bad drive anymore.
>>>>> Like cars[1], they're all good and do what it says on the box. Same
>>>>> with bikes[2].
>>>>>
>>>>> A manufacturer may have some bad luck and a product range is less than
>>>>> perfect, but even that is quite rare and most stuff-ups can be fixed
>>>>> with new firmware. So it's all good.
>>>>
>>>>
>>>> Those are my thoughts too.  It doesn't matter what brand you go with, they
>>>> all have some sort of failure at some point.  They are not built to last
>>>> forever and there is always the random failure, even when a week old.
>>>> It's usually the loss of important data and not having a backup that
>>>> makes it sooooo bad.  I'm not real picky on brand as long as it is a
>>>> company I have heard of.
>>>>
>>>
>>> One thing to keep in mind is statistics. For a single drive by itself
>>> it hardly matters anymore what you buy. You cannot predict the
>>> failure. However if you buy multiple identical drives at the same time
>>> then most likely you will either get all good drives or (possibly) a
>>> bunch of drives that suffer from similar defects and all start failing
>>> at the same point in their life cycle.  For RAID arrays it's
>>> measurably best to buy drives that come from different manufacturing
>>> lots, better yet from different factories, and maybe even from different
>>> companies. Then, if a drive fails, assuming the failure is really the
>>> fault of the drive and not some local issue like power sources or ESD
>>> events, etc., it's less likely other drives in the box will fail at
>>> the same time.
>>>
>>> Cheers,
>>> Mark
>>>
>>>
>>
>>
>>
>> You make a good point too.  I had a headlight go out on my car once
>> long ago.  I, not thinking, replaced them both since the new ones were
>> brighter.  Guess what, when one of the bulbs blew out, the other was out
>> VERY soon after.  Now, I replace them but NOT at the same time.  Keep in
>> mind, just like a hard drive, when one headlight is on, so is the other
>> one.  When we turn our computers on, all the drives spin up together so
>> they are basically all getting the same wear and tear effect.
>>
>> I don't use RAID, except to kill bugs, but that is good advice.  People
>> who do use RAID would be wise to follow it.
>>
>> Dale
>>
>> :-)  :-)
>>
> 
> hum hum!
> I know that Windows does this by default (it annoys me so I disable it)
> but does Linux disable or stop running the disks if they're inactive?
> I'm assuming there's an option somewhere - maybe just `unmount`!
> 


The default is to keep them all running and not spin them down.  I have
never had a Linux OS spin down a drive unless I told it to.  You can do
this, though.  The command and option are:

hdparm -S <timeout> /dev/sdX

X is the drive letter, and the timeout value controls how long the drive
sits idle before it spins down.  There is also the -s option, but it is
not recommended.

There are also the -y and -Y options.  Before using ANY of these, read
the man page.  Each one has its uses and you need to know for sure which
one does what you want.
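
To give a rough idea of the difference, again assuming an example drive
at /dev/sdb:

hdparm -y /dev/sdb    # drop the drive into standby (spun down) right now
hdparm -Y /dev/sdb    # put the drive to sleep; it has to be reset before
                      # it can be accessed again
hdparm -C /dev/sdb    # report the drive's current power state

That last one, -C, is a handy way to check whether your -S setting is
actually kicking in.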

Dale

:-)  :-)

-- 
I am only responsible for what I said ... Not for what you understood or
how you interpreted my words!

Miss the compile output?  Hint:
EMERGE_DEFAULT_OPTS="--quiet-build=n"
