Can no one help me?

On Wednesday, 8 April 2020 at 18:16:36 UTC+2, birgit.ducarroz wrote:
>
> Hi list, 
>
> I'm wondering what the best configuration is for a backup to a single 
> tape drive (no autochanger). 
>
> First of all: I have two separate Bareos servers. One backs up only to 
> hard disks; the second will be used to do full backups once a month to 
> single tapes. The two servers are NOT connected to each other, so my 
> question here refers only to the tape server. 
>
> AIM: a monthly full backup to tape, faster restores, and fewer write 
> interruptions on the tape itself. 
>
> I'm reading the documentation, but I'm not sure I understand it correctly. 
> I have configured a mount of about 15 TB as the spool directory. 
>
> Now my questions: 
> a) When using a big spool directory on hard disks, is it also recommended 
> to use copy/migration jobs? 
>
> b) How many Maximum Concurrent Jobs do you recommend for a tape device if 
> the aim is to speed up restores? 
>
> Currently, I plan to configure my tape device like this 
> (cat /bareos-sd.d/device/TapeStorage.conf): 
>
> Device { 
>    Name = tape 
>    Drive Index = 0 
>    Media Type = tape 
>    Archive Device = /dev/tape/by-id/scsi-350050763121460f3 
>    LabelMedia = yes;                 # lets Bareos label unlabeled media 
>    AutomaticMount = yes;             # when device opened, read it 
>    RemovableMedia = yes; 
>    AlwaysOpen = no; 
>    RandomAccess = no; 
>    Spool Directory = /mnt/TapeSpool 
>    Maximum Concurrent Jobs = 5 
>    Description = "Tape device. A connecting Director must have the same Name and MediaType." 
> } 
>
> Thank you for any hints! 
> Kind regards, 
> Birgit 
>
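
To make question b) a bit more concrete, here is a rough, untested sketch of
how the job-side spooling settings could look on the Director. The JobDefs,
pool and storage names are only placeholders:

  # bareos-dir.d/jobdefs/MonthlyTape.conf  (placeholder names)
  JobDefs {
    Name = "MonthlyTape"
    Type = Backup
    Level = Full
    Storage = Tape                  # Storage resource pointing at the SD tape device
    Pool = Full-Tape                # placeholder tape pool
    Spool Data = yes                # write to the disk spool first, then despool to tape
    Spool Attributes = yes          # spool catalog attributes as well
    Maximum Concurrent Jobs = 5     # matches the device setting above
  }

In addition, I assume the Device resource above could get a line like

  Maximum Spool Size = 12000 GB     # placeholder value, somewhat below the 15 TB mount

so that the spool file system cannot fill up completely.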

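And regarding question a): a copy job in this context would mean first
backing up to a disk pool on the tape server and then copying those jobs
onto tape. Below is an untested sketch with placeholder names, based on my
reading of the copy-job documentation:

  # bareos-dir.d/job/CopyToTape.conf  (placeholder names)
  Job {
    Name = "CopyDiskToTape"
    Type = Copy
    Messages = Standard
    Pool = Disk-Full                    # source pool on disk
    Selection Type = PoolUncopiedJobs   # copy every job not yet copied from this pool
  }

  # The source pool points the copies at the tape pool:
  Pool {
    Name = Disk-Full
    Pool Type = Backup
    Storage = File                      # disk storage resource (placeholder)
    Next Pool = Full-Tape               # copies land in this tape pool
  }

Is something along these lines what is usually recommended together with a
big spool directory, or is spooling alone enough?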