On 2018-07-30 00:38, Kamil Jońca wrote:
Gene Heskett <ghesk...@shentel.net> writes:

On Saturday 28 July 2018 08:30:27 Kamil Jońca wrote:

Gene Heskett <ghesk...@shentel.net> writes:

[..]

Too many dumps per spindle, so drive seeks take time and cause a timeout?

As I can see in gdb/strace planner hangs on "futex"
'futex' is short for 'Fast Userspace muTEX'; it's a synchronization primitive. Based on personal experience (not with Amanda, but debugging software hangs in general), this usually means either a threading issue or a deadlock somewhere between processes. Either way, it's probably an issue on the local system, and it most likely only shows up when backing up more than one client because that is when more processes and threads are actually doing things.
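
If you want to confirm what is blocked before touching anything, a per-thread backtrace usually shows what the futex is protecting. Roughly like this (replace <pid> with the hung planner's process id):

    # attach to the hung planner process
    strace -f -p <pid>           # shows which thread is blocked in futex()
    gdb -p <pid>
    (gdb) thread apply all bt    # backtrace of every thread; look for two
                                 # threads/processes each waiting on a lock
                                 # the other one holds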

This is probably going to sound stupid, but try updating/rebuilding/reinstalling Perl, whatever Perl packages Amanda depends on (I don't remember which packages they are), and Amanda itself. Most of the time when I see this kind of issue, it ends up being a case of at-rest data corruption in the executables or libraries, and reinstalling the problem software typically fixes things.
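
If you'd rather check for corruption first, most package managers can verify installed files against their recorded checksums; something like the following, with the package names only guessed at since I don't know which distribution this is:

    # Debian/Ubuntu (package names are guesses):
    debsums -s amanda-server amanda-common
    # RPM-based systems:
    rpm -V amanda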


1. I do not configure spindle at all.

So it's possible to have multiple dumps from the same spindle at the same time.

No. There is another parameter,
--8<---------------cut here---------------start------------->8---
      maxdumps int
            Default: 1. The maximum number of backups from a single host
            that Amanda will attempt to run in parallel. See also the
            inparallel option.
--8<---------------cut here---------------end--------------->8---

And I use the default value, so I have at most one dump per host at a time (and I am quite happy with this).

Of course I can change spindles for testing, but, to be honest, I do not understand how that would help.



Please, give every disk in each machine its own unique spindle number.
Your backups should be done much faster.
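
(For anyone following along: spindle numbers are the optional last field of each disklist entry; a sketch with made-up hostnames and a stock dumptype. Disks on the same host that share a spindle number are never dumped at the same time.)

    # disklist: the last field is the spindle number (hostnames illustrative)
    client1  /home  comp-user-tar  1
    client1  /var   comp-user-tar  2
    client2  /home  comp-user-tar  1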

I do not want faster dumps. I want working dumps.

KJ

