On Thu, Nov 08, 2018 at 11:23:44 -0800, Chris Miller wrote:

> So, now I have a couple of "clean-up" questions to conclude this
> thread: vtape labels: Are these just an artifact of the tape heritage,
> meaning, How is the label any more restrictive/protective than the
> path to and filename of the vtape? It's not like you can inadvertently
> mount the wrong directory, is it?  Well, actually, in the case of my NAS
> configuration, I guess that is possible, but unlikely except in the
> case of some sort of NAS failure and recovery. Is there a discussion
> somewhere describing how these are used and what sort of failures they
> can prevent in the vtape world?

As Jon mentioned, there are actually many different ways to set up
vtapes, especially once you start including external USB drives or other
removable media in your mix of backup destinations.  The existence of
tape labels allows Amanda to keep track of all the parts, and make sure
it's writing to the place it thinks it should be (and, therefore, to be
certain that it is *over*-writing the dumps it intended to overwrite).

In any case, tape labels are such an important part of Amanda's whole
structure that there's no way to run it without them -- tape labels are
used both in all sorts of interaction with the user and in Amanda's
internal record keeping.  After a short while I think you
will see how interacting based on the tape label name is actually a lot
clearer than using a directory-path name or something along those
lines...
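
(For instance -- the config, host, and disk names here are just
hypothetical -- a typical day-to-day place you see the labels is in the
output of commands like:

    amadmin MyConfig find myhost /home

which lists each dump of that DLE identified by the vtape label and the
file number on that vtape, rather than by any directory path.)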

In your case, though (i.e. with a fixed set of always-available slot
directories), it certainly makes sense to set up an obvious 1-1 mapping
between labels and slots, so that you don't have to think about or go
hunting to figure out which vtape holds a particular label.  Amanda
won't care about that (it will always look on the vtape and check the
label found there before doing anything else), but it'll make your life
slightly easier in those situations where you need to go look at the raw
dump files or whatever.

(For that reason, I don't use a "number 0" in my tape label sequences,
but start my tape numbering with 1 to match the "slotN" directory
numbering.  But you can just as easily use tapes 0-through-(N-1) to go
with slots 1-through-N, or whatever.)
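
(A quick, hedged illustration of setting up that 1-1 mapping by hand --
the config and label names are made up -- would be to label each slot's
vtape to match its slot number:

    amlabel MyConfig MyData-01 slot 1
    amlabel MyConfig MyData-02 slot 2
    # ...and so on for each slot

or you can let Amanda handle it via the "autolabel" setting in
amanda.conf; see the amanda.conf(5) man page for the substitution
variables your version supports.)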


> And back to my original question about "per-client configuration", I
> recognize that I will effectively be running N copies of AMANDA, and
> none of them will be aware of the others. I think this means that I
> have defeated the scheduler. I don't want to do that. It occurs to me
> that AMANDA does not know what any other copies are doing, which means
> that they could ALL schedule level 0 on the same night! I think I'd
> like to change my design from per-client configs in separate
> directories, to per-client qualified definitions in one amanda.conf. I
> see artifacts in various examples that lead me to believe that this
> can be done, and is probably preferable to my current scheme, because
> I then would have only one copy of AMANDA running and there can be a
> more sensible schedule. I'm most interested -- in fact, I am probably
> only interested -- that each client be able to direct backup storage
> to a location specific to that client.

Well, if in fact you are trying to dump each client to a completely
separate destination, then I don't think it matters whether you use
separate Amanda runs for each client, as far as the scheduling goes.

(Usually one of the points of the scheduling is to mix-and-match dump
levels from all the different clients so that when they are all combined
they fit together on that night's tape.  If you are sending each client
to a separate destination, that doesn't apply -- a single combined run
would be just as likely to schedule all the level 0s on the same night
as separate per-client runs would, since the size of the dumps would be
calculated separately for each client.)

But certainly there are advantages to having just one instance running,
so it makes sense for you to at least try both setups and see which one
works better for you.  (This is especially true since you are running
v3.5, which has much better support for this sort of thing than earlier
versions did.)

I haven't tried to do this sort of client-based separation of
backup destinations myself, so I can't give you a tested example... but
here are a few things to look at to help point you in the right
direction (with a rough sketch after the list):

  1) you will need to set up multiple chg-disk: changers, each pointing
     to a client-specific path on your NAS.

  2) for each DLE in the disklist you would set a 'tag' to indicate
     which client that DLE applies to.  (Probably the easiest way to do
     that would be to create a separate dumptype for each client and
     use those dumptypes directly in the disklist.)

  3) Finally, you will need to set up a separate storage for each
     client, each referencing the tpchanger configured in 1) to write to
     the backup location for that client.  Set the 'tapepool "$r"'
     option on those storage definitions so each storage also uses its
     own separate pool, and then set the 'dump-selection' parameter to
     point to the per-client tag you set up on the DLEs.
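
To make those three pieces a bit more concrete, here is a rough,
untested sketch of what the amanda.conf pieces might look like.  All of
the "client1" names, paths, and slot counts are made-up placeholders,
and you should double-check the exact 'tag' and 'dump-selection' syntax
against the amanda.conf(5) man page for your 3.5 install:

    # Step 1: one chg-disk changer per client, pointing at that
    # client's area on the NAS
    define changer client1_vtapes {
        tpchanger "chg-disk:/mnt/nas/amanda/client1"
        property "num-slot" "10"
        property "auto-create-slot" "yes"
    }

    # Step 2: a per-client dumptype carrying the tag
    define dumptype client1-dump {
        global                  # assumes a base dumptype named "global",
                                # as in the sample configs
        tag "client1"
    }

    # Step 3: a per-client storage tying the changer, its own pool,
    # and the tag together
    define storage client1-storage {
        tpchanger "client1_vtapes"
        tapepool "$r"
        dump-selection "client1" ALL
    }

    storage "client1-storage" "client2-storage"

and then in the disklist each of that client's DLEs would use the
matching dumptype, e.g.:

    client1.example.com  /home  client1-dump

(Repeat the changer/dumptype/storage definitions for each client.)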

Hopefully that gets you started :)

                                                Nathan


----------------------------------------------------------------------------
Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239
