On 2017-10-16 13:22, Stefan G. Weichinger wrote:
> On 2017-10-16 at 15:20, Jean-Louis Martineau wrote:
>> Amanda 3.5 can do everything you want just by running the amdump command.
>>
>> Using a holding disk:
>>
>> * You configure two storages
>> * All dumps go to the holding disk
>> * All dumps are copied to each storage, not necessarily at the same
>> time or in the same run.
>> * The dumps stay in holding until they are copied to both storages
>> * You can tell Amanda that everything must go to both storages, or
>> only the fulls/incrementals of some DLEs
> 
> 
> So is it possible to set up a mix of "normal" daily backups with
> incrementals/fulls, and "archive"/vault backups with only the full
> backups of a specific day?
> 
> I have requests to do so for a customer; until now we used amanda-3.3.9
> and two configs sharing most of the configuration and the disklist ...
> 
> Nathan, the OP of this thread, and others (including me) would like to
> see actual configuration examples, a howto, or something similar.
> 
> The man page https://wiki.zmanda.com/man/amvault.8.html is a bit minimal
> ...
> 
> Is there anything in addition to that man page, and maybe:
> 
> http://wiki.zmanda.com/index.php/How_To:Copy_Data_from_Volume_to_Volume
> 
> ?
While it's not official documentation, I've got a working configuration with
Amanda 3.5.0 on my personal systems, using locally accessible storage for
primary backups and S3 for vaulting (I vault everything; the local storage
is for getting old files back, while S3 is for disaster recovery).  I've
put a copy of the relevant config fragment at the end of this reply, with
various private data replaced and some bits that aren't really relevant
(like labeling options) elided.

For this to work reliably, you need to define a holding disk (although it
can be on the same disk as the local vtape library).  I personally start
flushing from the holding disk the moment any dump completes, since all the
data fits on one tape and the S3 upload takes longer than creating the
backups in the first place, but it should work just as well if you buffer
dumps on the holding disk and flush them later.
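
For reference, the holding-disk side of such a setup looks roughly like
this (the directory is just a placeholder; "autoflush all" together with
zero flush thresholds is what makes flushing start as soon as a dump
finishes):

holdingdisk hd1 {
        directory "/var/amanda/holding" # placeholder path
        use -100 mb                     # use all but 100 MB of the filesystem
}

autoflush all                   # flush everything in holding, even old dumps
flush-threshold-dumped 0        # don't wait for a tape's worth of data
flush-threshold-scheduled 0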

The given S3 configuration assumes you have already created the destination
bucket (I pre-create mine since I use lifecycle rules and cross-region
replication, both of which are easier to set up if you create the bucket by
hand).  I also use a dedicated IAM user for the S3 side of things for both
security and accounting reasons, but that shouldn't impact anything.
Additionally, I've found that the S3 uploads work much more reliably if you
set a reasonable part size and enable part caching; a part size of 1 GB
seems to give a good balance between performance and reliability.

8<-----------------------------------

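# One tapetype shared by the local vtapes and the S3 "tapes"; the 1 GB
# parts with in-memory caching are what keep the S3 uploads reliable.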
define tapetype vtape {
        length 16 GB
        part-size 1 GB
        part-cache-type memory
}

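# Primary storage: a vtape library on locally accessible disk.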
define changer local-vtl {
        tapedev "chg-disk:/path/to/local/vtapes"
}

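# Vault storage: sixteen "slots" in one pre-created S3 bucket, accessed
# with a dedicated IAM user's credentials.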
define changer aws {
        tapedev "chg-multi:s3:example-bucket/slot{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16}"
        device-property "S3_SSL"                "YES"
        device-property "S3_ACCESS_KEY"         "IAM_ACCESS_KEY"
        device-property "S3_SECRET_KEY"         "IAM_SECRET_KEY"
        device-property "S3_MULTI_PART_UPLOAD"  "YES"
        device-property "CREATE_BUCKET"         "NO"
        device-property "S3_BUCKET_LOCATION"    "us-east-1"
        device-property "STORAGE_API"           "AWS4"
}

define storage local {
        tapepool "local"
        tapetype "vtape"
        tpchanger "local-vtl"
}

define storage cloud {
        tapepool "s3"
        tapetype "vtape"
        tpchanger "aws"
}

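# amdump writes to the local storage; everything is then vaulted to S3.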
storage "local"
vault-storage "cloud"

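As for Stefan's question about vaulting only the full backups: I vault
everything, so I haven't tested this, but amanda.conf(5) for 3.5 documents
a dump-selection parameter on storage definitions that should do it;
something along these lines ought to restrict the cloud storage to fulls:

define storage cloud {
        tapepool "s3"
        tapetype "vtape"
        tpchanger "aws"
        dump-selection ALL FULL # untested: per the man page, only full
                                # dumps should be written to this storage
}

The dumptype-level "tag" parameter apparently pairs with dump-selection if
you need to pick out individual DLEs rather than everything.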