On 2017-11-13 11:11, Gene Heskett wrote:
On Monday 13 November 2017 10:12:47 Austin S. Hemmelgarn wrote:

On 2017-11-13 09:56, Gene Heskett wrote:
On Monday 13 November 2017 07:19:45 Austin S. Hemmelgarn wrote:
On 2017-11-11 01:49, Jon LaBadie wrote:
Just a thought.  My amanda server has seven hard drives
dedicated to saving amanda data.  Only 2 are typically
used (holding and one vtape drive) during an amdump run.
Even then, the usage is only for about 3 hours.

So there is a lot of electricity and disk drive wear for
inactive drives.

Can today's drives be unmounted and powered down, then
when needed, powered up and mounted again?

I'm not talking about system hibernation, the system
and its other drives still need to be active.

Back when 300GB was a big drive I had 2 of them in
external USB housings.  They shut themselves down
on inactivity.  When later accessed, there would
be about 5-10 seconds delay while the drive spun
up and things proceeded normally.

That would be a fine arrangement now if it could
be mimicked.

Aside from what Stefan mentioned (using hdparm to set the standby
timeout; check the man page for hdparm, as the numbers it takes are
not exactly sensible), you might consider looking into auto-mounting
each of the drives, as that can help eliminate things that would keep
the drives on-line (or make it more obvious that something is still
using them).
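
For example (untested here, and the device name below is only a
placeholder; double-check the -S encoding against your own hdparm man
page), a one-hour spin-down timeout would look something like:

  # Values 1-240 are multiples of 5 seconds; 241-251 are multiples of
  # 30 minutes, so 242 asks for roughly a one-hour standby timeout.
  hdparm -S 242 /dev/sdX

  # Or put the drive into standby immediately, e.g. right after a run.
  hdparm -y /dev/sdX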

I've investigated that, and I have amanda wrapped up in a script
that could do that, but ran into a showstopper I've long since
forgotten about.  All this was back when I was writing that wrapper,
years ago now.  One of the showstoppers, as I recall, was the fact
that only root can mount and unmount a drive, and my script runs as
amanda.

While such a wrapper might work if you use sudo inside it (you can
configure sudo to let root run commands as the amanda user without
needing a password, and then run the wrapper itself as root), what I
was trying to refer to in a system-agnostic manner (since the exact
mechanism differs between UNIX derivatives) was on-demand
auto-mounting, as provided by autofs on Linux or the auto-mount daemon
(amd) on BSD.  With on-demand auto-mounting you don't need a wrapper
at all: the access attempt triggers the mount, and the mount times out
after some period of inactivity and is unmounted again.  It's mostly
used for network resources (possibly with special auto-lookup
mechanisms), since certain protocols (NFS in particular) tend to have
issues if the server goes down while a share is mounted remotely, even
if nothing is happening on that share.  However, it works just as well
for auto-mounting local fixed or removable volumes that aren't needed
all the time (I use it for a handful of things on my personal systems
to minimize idle resource usage).
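
On most Linux systems the autofs side of that amounts to two small
files.  This is only a sketch; the mount point, labels, filesystem
type, and timeout are placeholders you'd adjust for your own layout:

  # /etc/auto.master -- unmount automatically after 10 minutes idle
  /amanda  /etc/auto.amanda  --timeout=600

  # /etc/auto.amanda -- one entry per vtape/holding disk
  vtape1   -fstype=ext4  :/dev/disk/by-label/vtape1
  holding  -fstype=ext4  :/dev/disk/by-label/holding

With something like that in place, amanda simply accesses
/amanda/vtape1 and the mount happens on demand; once the timeout
expires after the run, the filesystem is unmounted and the hdparm
standby timer can then spin the drive down.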

Sounds good, perhaps. I am currently up to my eyeballs in an unrelated
problem, and I won't get to this again until that project is completed
and I have brought the 2TB drive in and configured it for amanda's
usage. That will tend to enforce my one-thing-at-a-time-but-do-it-right
bent. :)  What I have is working for a loose definition of working...
Yeah, I know what that's like. Prior to switching to amanda where I worked, we had a home-grown backup system that had all kinds of odd edge cases I had to make sure never happened. I'm extremely glad we decided to stop using that, since it means I can now focus on more interesting problems (in theory at least, we're having an issue with our Amanda config right now too, but thankfully it's not a huge one).

But if I allow the 2TB to be unmounted and self-powered down once
daily, what shortening of its life would it be subjected to?  In other
words, how many start-stop cycles can it survive?
It's hard to say for certain. For what it's worth though, you might want to test this to make sure it's actually going to save you energy. It takes a lot of power to get the platters up to speed, but it doesn't take much to keep them running at that speed. It might be more advantageous to just configure the device to idle (that is, park the heads) after some timeout and leave the platters spinning instead of spinning down completely (and that should result in less wear on the spindle motor).
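
If the drive honors APM (not all of them do), that middle ground can
be requested with hdparm too; as before, the device name is just a
placeholder:

  # 128 is the most aggressive APM level that still does not permit
  # spin-down, so the heads can park while the platters keep turning.
  hdparm -B 128 /dev/sdX

  # Report the drive's current power state (active/idle vs. standby).
  hdparm -C /dev/sdX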

Interesting, I had started a long test yesterday, and the reported
hours have wrapped in the report, apparently at 65536 hours.  Somebody
apparently didn't expect a drive to last that long? ;-)  The drive?
Healthy as can be.
That's about 7.48 years, so I can actually somewhat understand not going past 16 bits for that, since most people don't use a given disk for more than about 5 years' worth of power-on time before replacing it. However, what really matters is not how long the device has been powered on, but how much abuse the drive has taken. Running 24/7 for 5 years with no movement of the system (including nothing like earthquakes), in a temperature-, humidity-, and pressure-controlled room, will get you near zero wear on anything in the drive but the bearings and possibly the heads. In contrast, that same five years of runtime in a laptop that's being taken all over the place will usually result in a drive that has numerous errors in addition to noticeable mechanical wear.
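
If you want to look at the raw counters behind that (assuming the long
test mentioned above was a SMART self-test), smartctl will print them;
the device name is a placeholder and attribute names vary somewhat by
vendor:

  # Attribute 9 (Power_On_Hours) and attribute 4 (Start_Stop_Count)
  # are the two numbers most relevant to the spin-down question.
  smartctl -A /dev/sdX

  # Start another extended (long) self-test.
  smartctl -t long /dev/sdX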
