Re: [Bacula-users] bacula archive/off-site-backup to S3 storage, what's the best way?

2020-02-17 Thread Radosław Korzeniewski
Hello,

On Tue, 11 Feb 2020 at 18:41, Erik Geiger wrote:

> You are referring to this documentation?
> https://www.bacula.org/9.4.x-manuals/en/main/New_Features_in_9_4_0.html#SECTION00300100
>

Yes.


> I wasn't able to build bacula with cloud support and I can't use the rpm
> packages from bacula as they aren't supported by the puppet module I'm
> using. So I was looking for something like an "after" job syncing to S3
> with aws cli or the like.
>

Sorry about that. You should create a ticket at bugs.bacula.org to report
the problem.


> But if the Cloud Storage functionality is the way to go I'll figure out
> how to compile with S3 support.
> So if I get the documentation right the backup is first stored to the
> local disk and afterwards moved to the cloud while I could still do a
> restore from local disk as long as I configure the "Cache
> Retention", right?
>

The default local disk storage for cloud backups works as a cache, with
fully configurable behavior and retention. You can use it as the single
archive storage: during backup all your data is written to local disk and
then synced to S3 as configured. Or you can use it as DR storage with Copy
Jobs, where local disk is your main storage and is copied to the cloud
after the local backup completes. Which configuration is right depends on
your requirements.
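For illustration, a cloud-backed device can be sketched along these lines
(directive names follow the 9.4 cloud documentation; the bucket,
credentials, and paths are placeholders, not a verified configuration):

```
Cloud {
  Name = S3Cloud
  Driver = "S3"
  HostName = "s3.amazonaws.com"
  BucketName = "my-bacula-bucket"     # placeholder
  AccessKey = "XXXXXXXX"              # placeholder
  SecretKey = "XXXXXXXX"              # placeholder
  Protocol = HTTPS
  UriStyle = VirtualHost
  Upload = EachPart                   # push each part as soon as it is full
}

Device {
  Name = CloudDev
  Device Type = Cloud
  Cloud = S3Cloud
  Archive Device = /bacula/cloud-cache   # the local disk cache directory
  Maximum Part Size = 10 MB
  Media Type = CloudType
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}
```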
I hope it helps.

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] [EXTERNAL] Re: bacula cloud backup

2020-02-17 Thread Radosław Korzeniewski
Hello,

On Tue, 11 Feb 2020 at 19:19, Dimitri Maziuk via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> On 2/11/20 10:59 AM, Radosław Korzeniewski wrote:
>
> > Because a community _always_ knows better than developers. :) Just read
> > some hot threads on this group.
>
> BS. This is IT 101: users don't know what they want.
>

Except Bacula Community users. :)

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net


Re: [Bacula-users] [EXTERNAL] Re: Bacula - waiting to reserve a device

2020-02-17 Thread Radosław Korzeniewski
Hello,

On Tue, 11 Feb 2020 at 18:30, Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC] <
uthra.r@nasa.gov> wrote:

> Radoslaw,
>
>
>
> Bacula keeps sending me emails “waiting to reserve a device” every 2
> minutes which is annoying. Is there a way to turn off Bacula from sending
> emails only in this case?
>

It depends on whether you want to limit the number of emails received or
disable this kind of message entirely.
If you want to disable them, you can do it in your Messages resource
definition, AFAIR. Check the message class in the documentation and
disable that class.
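As a hedged sketch only (AFAIR the reservation notices are delivered in the
mount class, but please verify the class in the Messages chapter of the
manual first), excluding that class from the mail destination would look
roughly like:

```
Messages {
  Name = Standard
  MailCommand = "/opt/bacula/bin/bsmtp -h localhost -f bacula@example.com -s \"Bacula: %t %e of %c %l\" %r"
  # "!mount" drops the mount/operator-class messages from mail while
  # keeping everything else on the console and in the log file.
  Mail = admin@example.com = all, !skipped, !mount
  Console = all, !skipped, !saved
  Append = "/opt/bacula/log/bacula.log" = all, !skipped
}
```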

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net


Re: [Bacula-users] Why would bacula consider duplicate jobs as fatal?

2020-02-17 Thread Radosław Korzeniewski
Hello,

On Wed, 12 Feb 2020 at 16:23, William Muriithi wrote:

> > I think I do miss your point. Why anyone on Earth would like to
> configure a backup job in such a way that the next job will intentionally
> run when a previous job did not complete and intentionally setup
> cancelation of the duplicated job?
>
> IMVHO if I knew that my backup job is running for i.e. 8H then I'll never
> schedule next backup job on 4H period and setup a cancellation of the
> duplicate job because it will cancel every second job by design. It would
> be insane, right?
>
> That is how I initially set it up.  The problem is, sometimes, the tapes
> run out on weekend or at night and jobs fall behind.


OK, I understand now. It is not an intentional configuration but a rare
occurrence.

> So over time, you end up with 2 or 3 jobs scheduled to backup the same
> file.  Instead of restarting to cleanup the backlog, I thought rejecting
> duplicates was less involving.
>
> How do you handle these cases without configuring bacula to avoid
> duplicates?
>

I personally always configure Bacula to disallow duplicates and cancel
them. But you can do whatever suits you: simply allow duplicates and let
them queue, so each job runs as soon as all required resources become
available again. Then no jobs fail just because some jobs are delayed.
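A rough sketch of both approaches, using the duplicate-control directives
from the Job resource (the job names and JobDefs are placeholders; check
the exact directive behavior in your version's documentation):

```
# Variant A: disallow duplicates and cancel the queued copy (my preference)
Job {
  Name = "backup-host1"
  JobDefs = "DefaultJob"             # placeholder
  Allow Duplicate Jobs = no
  Cancel Queued Duplicates = yes     # drop a duplicate that is still waiting
  Cancel Running Duplicates = no     # never kill the job already in progress
}

# Variant B: allow duplicates and let them queue; with the concurrency
# limit the later job simply waits for resources instead of failing
Job {
  Name = "backup-host2"
  JobDefs = "DefaultJob"             # placeholder
  Allow Duplicate Jobs = yes
  Maximum Concurrent Jobs = 1
}
```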

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net


Re: [Bacula-users] Why would bacula consider duplicate jobs as fatal?

2020-02-17 Thread Radosław Korzeniewski
Hello,

On Wed, 12 Feb 2020 at 00:10, David Brodbeck wrote:

>
>
> On Fri, Jan 31, 2020 at 1:54 AM Radosław Korzeniewski <
> rados...@korzeniewski.net> wrote:
>
>> Hello,
>>
>> On Wed, 29 Jan 2020 at 17:51, William Muriithi wrote:
>>
>>>
>>>
>>   How would this make sense considering it was intentionally configured?
>>
>>
>> I think I do miss your point. Why anyone on Earth would like to configure
>> a backup job in such a way that the next job will intentionally run when a
>> previous job did not complete and intentionally setup cancelation of the
>> duplicated job?
>> IMVHO if I knew that my backup job is running for i.e. 8H then I'll never
>> schedule next backup job on 4H period and setup a cancellation of the
>> duplicate job because it will cancel every second job by design. It would
>> be insane, right?
>>
>
> One example of a situation where this actually makes sense is if your full
> backups take a lot longer than your incrementals. For example, I have some
> workstations where a full takes three days, but an incremental takes only a
> few minutes. I'd rather have the incremental run every day (and
> occasionally get skipped when a full backup is running) than limit myself
> to only one backup every three days.
>

In my very, very, very humble opinion it does not make sense, and the
backup policy is designed incorrectly. If your policy is to back up every
day, then you should not allow a full backup to take longer than that. In
such a case I would recommend implementing VirtualFull backups, which
should solve all these issues.
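For example (a sketch only; pool and schedule names are placeholders, and
the VirtualFull consolidation needs a correctly configured Next Pool):

```
Schedule {
  Name = "DailyWithVirtualFull"
  # Daily incrementals stay short; the weekly VirtualFull consolidates
  # them on the server side without a multi-day transfer from the client.
  Run = Level=Incremental mon-sat at 23:05
  Run = Level=VirtualFull sun at 23:05
}

Pool {
  Name = "Incr-Pool"
  Pool Type = Backup
  Storage = File1                    # placeholder storage
  Next Pool = "Consolidated-Pool"    # where the VirtualFull is written
}
```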

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net


Re: [Bacula-users] bacula archive/off-site-backup to S3 storage, what's the best way?

2020-02-17 Thread Erik Geiger
On Mon, Feb 17, 2020 at 1:41 PM Radosław Korzeniewski <
rados...@korzeniewski.net> wrote:

> Hello,
>
> On Tue, 11 Feb 2020 at 18:41, Erik Geiger wrote:
>
>> You are referring to this documentation?
>> https://www.bacula.org/9.4.x-manuals/en/main/New_Features_in_9_4_0.html#SECTION00300100
>>
>
> Yes.
>
>
>> I wasn't able to build bacula with cloud support and I can't use the rpm
>> packages from bacula as they aren't supported by the puppet module I'm
>> using. So I was looking for something like an "after" job syncing to S3
>> with aws cli or the like.
>>
>
> Sorry for that. You should create a ticket at bugs.bacula.org to show
> that something is wrong about it.
>

Hi Radoslaw,

It turned out that I was able to build it using the libs3 provided by
Bacula [https://www.bacula.org/downloads/libs3-20181010.tar.gz].
Sadly, that wasn't really documented.


>
>> But if the Cloud Storage functionality is the way to go I'll figure out
>> how to compile with S3 support.
>> So if I get the documentation right the backup is first stored to the
>> local disk and afterwards moved to the cloud while I could still do a
>> restore from local disk as long as I configure the "Cache
>> Retention", right?
>>
>
> The default local disk backup for cloud storage works as a cache only with
> fully configurable behavior and retention. You can use it as the single
> archive storage, so during backup all your data will be saved on local
> disks and then synced into S3 as configured. You can use it as a DR storage
> using Copy Jobs where your local disks will be your main storage which will
> be copied into a storage cloud after a local backup. All possible
> configurations depends on your requirements.
> I hope it helps.
>

I do have the cloud backup running now. It all works even better than
expected regarding the S3 upload. I also realised that I can use "Cache
Retention" so the local disk won't run out of disk space while still
allowing fast restores within the "Cache Retention" period.
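For anyone else following along, the relevant knob is (as far as I can
tell) a Pool directive; a sketch with placeholder names:

```
Pool {
  Name = CloudPool
  Pool Type = Backup
  Storage = S3Storage        # placeholder cloud storage resource
  # Parts already uploaded to the cloud may be pruned from the local
  # cache once they are older than this, keeping disk usage bounded
  # while recent backups still restore at local-disk speed.
  Cache Retention = 7 days
}
```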

Thanks again,

Erik

> best regards
> --
> Radosław Korzeniewski
> rados...@korzeniewski.net
>


Re: [Bacula-users] bacula archive/off-site-backup to S3 storage, what's the best way?

2020-02-17 Thread Radosław Korzeniewski
Hello,

On Mon, 17 Feb 2020 at 15:27, Erik Geiger wrote:

>
> On Mon, Feb 17, 2020 at 1:41 PM Radosław Korzeniewski <
> rados...@korzeniewski.net> wrote:
>
>> Hello,
>>
>> On Tue, 11 Feb 2020 at 18:41, Erik Geiger wrote:
>>
>>> You are referring to this documentation?
>>> https://www.bacula.org/9.4.x-manuals/en/main/New_Features_in_9_4_0.html#SECTION00300100
>>>
>>
>> Yes.
>>
>>
>>> I wasn't able to build bacula with cloud support and I can't use the rpm
>>> packages from bacula as they aren't supported by the puppet module I'm
>>> using. So I was looking for something like an "after" job syncing to S3
>>> with aws cli or the like.
>>>
>>
>> Sorry for that. You should create a ticket at bugs.bacula.org to show
>> that something is wrong about it.
>>
>
> Hi Radoslaw,
>
> Turned out that I was able to build when using the libs3 provided by
> bacula [https://www.bacula.org/downloads/libs3-20181010.tar.gz]
> Sadly that wasn't really documented.
>

Sorry about that. You can still file an issue ticket about it.


>
>
>>
>>> But if the Cloud Storage functionality is the way to go I'll figure out
>>> how to compile with S3 support.
>>> So if I get the documentation right the backup is first stored to the
>>> local disk and afterwards moved to the cloud while I could still do a
>>> restore from local disk as long as I configure the "Cache
>>> Retention", right?
>>>
>>
>> The default local disk backup for cloud storage works as a cache only
>> with fully configurable behavior and retention. You can use it as the
>> single archive storage, so during backup all your data will be saved on
>> local disks and then synced into S3 as configured. You can use it as a DR
>> storage using Copy Jobs where your local disks will be your main storage
>> which will be copied into a storage cloud after a local backup. All
>> possible configurations depends on your requirements.
>> I hope it helps.
>>
>
> I do have the cloud backup running, now. All works even better than
> expected regarding the S3 upload. I also realised that I can use "Cache
> Retention" so the local disk won't run out of disk space while still
> allowing fast restores within the "Cache Retention" period.
>

Great! I'm very happy that it is working now and I could help.

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net


[Bacula-users] Bacula windows agent VSS snapshot takes very long

2020-02-17 Thread Guido van Brakel
Hi,



I have an issue on a server where the VSS snapshot created by the Windows
agent takes a very long time. I have already re-registered the VSS DLLs,
but that doesn't seem to help. Here is an example from the Bacula log:



Wrote label to prelabeled Volume "TrueCatalog_FileDaemon-10335_Storage03-10335-customer_Pool-6213-full_6213_2020-2-16_12-0-44" on file device "Device-10335-customer" (/bacula/10/Server-10335)

Max Volume jobs=1 exceeded. Marking Volume "TrueCatalog_FileDaemon-10335_Storage03-10335-customer_Pool-6213-full_6213_2020-2-16_12-0-44" as Used.

FileDaemon-10335 JobId 6533000: Generate VSS snapshots. Driver="Win64 VSS", Drive(s)="G"

Error: bsock.c:577 Read error from client:: ERR=No data available

Anyone able to help?



Best Regards,



Guido van Brakel

