Re: [Bacula-users] Offsite S3 backup

2020-12-21 Thread Žiga Žvan

Hi,
I'm using a dummy S3 bucket and upload the data with a Storage Gateway 
(Oracle Cloud is not supported directly); however, I have some trouble 
with this setup.

Directions are here: https://blog.bacula.org/whitepapers/CloudBackup.pdf
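
For reference, here is a trimmed-down sketch of the Cloud resource I use in 
bacula-sd.conf with the File driver pointed at the Storage Gateway mount 
(the mount point, bucket name and credentials below are only placeholders):

Cloud {
  Name = OracleViaStorageGateway
  Driver = "File"                         # File driver writes parts to a local path
  HostName = "/mnt/ocisg/bacula/backup"   # Storage Gateway NFS mount (placeholder)
  BucketName = "DummyBucket"              # dummy values; the gateway talks to the real bucket
  AccessKey = "DummyAccessKey"
  SecretKey = "DummySecretKey"
  Protocol = HTTPS
  UriStyle = VirtualHost
}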
Regards, Ziga


On 17.12.2020 18:42, Satvinder Singh wrote:

Hi,

Has anyone tested doing offsite backups to an S3 bucket? If yes, can someone 
point me in the right direction on how to do it?

Thanks

  


Satvinder Singh / Operations Manager
ssi...@celerium.com / Cell: 703-989-8030

Celerium
Office: 703-418-6315
www.celerium.com 

        






Re: [Bacula-users] bacula-sd - file driver - cloud resource

2020-12-21 Thread Žiga Žvan

Hello,
I'm using the File driver with a cloud resource. Bacula was able to back 
up data this way as long as it was writing to new volumes. Now, after 
the retention period, I'm getting the error: Fatal error: cloud_dev.c:983 
Unable to download Volume (see output below). The data on the cloud path 
looks OK, but the local cache contains only an empty part.1.


Is this expected?
Has anybody tested this scenario?
Should I avoid the File driver in a production environment?

Regards,
Ziga


[root@bacula db-01-weekly-vol-0365]# ls -la 
/mnt/ocisg/bacula/backup/db-01-weekly-vol-0365

total 0
drwxr-. 2 bacula disk   0 Oct 24 07:45 .
drwxr-xr-x. 2 bacula bacula 0 Dec 18 23:38 ..
-rw-r--r--. 1 bacula disk 256 Oct 24 07:43 part.1
-rw-r--r--. 1 bacula disk   35992 Oct 24 07:44 part.2
-rw-r--r--. 1 bacula disk   35993 Oct 24 07:44 part.3
-rw-r--r--. 1 bacula disk   381771773 Oct 24 07:45 part.4

[root@bacula db-01-weekly-vol-0365]# ls -la 
/storage/bacula/cloudcache/db-01-weekly-vol-0365

total 20
drwxr-.   2 bacula disk  28 Dec 11 23:10 .
drwxr-xr-x. 344 bacula bacula 16384 Dec 18 23:26 ..
-rw-r--r--.   1 bacula disk   0 Dec 11 23:10 part.1
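
In case it helps with diagnosis, the bconsole cloud command can show what the 
SD sees in the cloud for this volume and push cache parts up again; a rough 
sketch, as I understand the syntax from the cloud whitepaper (the storage name 
is a placeholder for the Director's Storage resource pointing at this device):

* cloud list storage=OciCloudStorage volume=db-01-weekly-vol-0365
* cloud upload storage=OciCloudStorage volume=db-01-weekly-vol-0365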

SD config (autochanger)

Device {
  Name = FSOciCloudStandard2
  Device type = Cloud
  Cloud = OracleViaStorageGateway
  Maximum Part Size = 1000 MB
  Media Type = File1
  Archive Device = /storage/bacula/cloudcache
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Autochanger = yes;
}
...
Device {
  Name = FSOciCloudStandard4
  Device type = Cloud
  Cloud = OracleViaStorageGateway
  Maximum Part Size = 1000 MB
  Media Type = File1
  Archive Device = /storage/bacula/cloudcache
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Autochanger = yes;
}

Cloud {
  Name = OracleViaStorageGateway
  Driver = "File"
  HostName = "/mnt/ocisg/bacula/backup"
  BucketName = "DummyBucket"
  AccessKey = "DummyAccessKey"
  SecretKey = "DummySecretKey"
  Protocol = HTTPS
  UriStyle = VirtualHost
  Truncate Cache = AtEndOfJob
}


21-Dec 19:14 bacula-dir JobId 2073: Start Backup JobId 2073, 
Job=db-01-backup.2020-12-21_19.14.14_48
21-Dec 19:14 bacula-dir JobId 2073: Using Device "FSOciCloudStandard2" 
to write.
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 db-01.prod.kr.cetrtapot.si JobId 2073: Fatal error: 
job.c:3013 Bad response from SD to Append Data command. Wanted 3000 OK data

, got len=25 msg="3903 Error append data:  "
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 bacula-sd JobId 2073: Warning: label.c:398 Open Cloud 
device "FSOciCloudStandard2" (/storage/bacula/cloudcache) Volume 
"db-01-weekly-vol-0365" failed: ERR=
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 bacula-sd JobId 2073: Warning: label.c:398 Open Cloud 
device "FSOciCloudStandard2" (/storage/bacula/cloudcache) Volume 
"db-01-weekly-vol-0365" failed: ERR=
21-Dec 19:14 bacula-sd JobId 2073: Marking Volume 
"db-01-weekly-vol-0365" in Error in Catalog.

21-Dec 19:14 bacula-sd JobId 2073: Fatal error: Job 2073 canceled.


On 06.12.2020 20:52, Žiga Žvan wrote:

Dear all,
I have been using Bacula 9.6.5 in production for a month now and I'm 
experiencing random backup failures from my clients. Specific hosts 
report errors like the outputs attached, yet the same host is able to 
perform a backup at some other time. The errors are more frequent with 
large backups (more errors on full backups than on incrementals, and 
more errors on hosts with large data sets).


I have tried setting a heartbeat interval 
(https://www.bacula.org/9.6.x-manuals/en/main/Client_File_daemon_Configur.html#SECTION00221) 
but there is no improvement; a sketch of the setting is shown below.
The error also occurs on hosts in the same zone as the Bacula server (no 
router/firewall in between).
The storage daemon is installed on the same server as the Bacula 
director. I'm using the File cloud driver (backup to local disk via a 
cloud resource).
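
For reference, this is roughly how I set the heartbeat (only the relevant 
directives shown; the 60-second value and the resource names are just examples):

# bacula-fd.conf on the client
FileDaemon {
  Name = client-fd                 # example name
  Heartbeat Interval = 60          # keep-alive towards the SD every 60 seconds
}

# bacula-sd.conf
Storage {
  Name = bacula-sd                 # example name
  Heartbeat Interval = 60          # keep-alives towards FD/DIR during long waits
}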


Could you please suggest a solution or a way to troubleshoot this 
further?

Thx!

Regards, Ziga Zvan

Backup from Linux hosts (on 05-Dec, 3 hosts failed and 20 hosts completed 
without error):
05-Dec 03:26 bacula-dir JobId 1721: Fatal error: Network error with FD 
during Backup: ERR=Connection reset by peer
05-Dec 03:27 bacula-dir JobId 1721: Fatal 

[Bacula-users] HELP - Progressive Virtual Full

2020-12-21 Thread Fábio Pinto

Hello,

I'm trying to set up a Progressive Virtual Full backup on an all-HDD 
(disk-based) system. From what the logs tell me, the job starts but the 
SD doesn't let it go through. I will put the log and the bacula-sd and 
bacula-dir configs below. I changed the hostnames and passwords for 
security reasons.


Here's the log from the PVF job:
bacula-dir JobId 450: Bacula bacula-dir 9.6.4 (08Jun20):
  Build OS:   x86_64-pc-linux-gnu ubuntu 20.04
  JobId:  450
  Job:    Teste_2.2020-12-21_13.58.13_58
  Backup Level:   Virtual Full
  Client: "myClient" 9.4.2 (04Feb19) 
x86_64-pc-linux-gnu,ubuntu,20.04

  FileSet:    "myFiles - Linux" 2020-12-07 20:17:23
  Pool:   "Virtual Full - Remoto" (From Job resource)
  Catalog:    "MyCatalog" (From Client resource)
  Storage:    "myStorage" (From Job resource)
  Scheduled time: 21-Dec-2020 13:58:13
  Start time: 17-Dec-2020 20:22:35
  End time:   17-Dec-2020 20:22:35
  Elapsed time:   1 sec
  Priority:   10
  SD Files Written:   0
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Volume name(s):
  Volume Session Id:  4
  Volume Session Time:    1608237853
  Last Volume Bytes:  0 (0 B)
  SD Errors:  0
  SD termination status:
  Termination:    Backup Canceled
bacula-dir JobId 450: Fatal error:
 Storage daemon didn't accept Device "ubuntu2004-Dev1" command.
bacula-dir JobId 450: Found 24 files to consolidate into Virtual Full.
bacula-dir JobId 450: Consolidating JobIds=427,428,429,431,432
bacula-dir JobId 450: Warning: This Job is not an Accurate backup so is 
not equivalent to a Full backup.
bacula-dir JobId 450: Start Virtual Backup JobId 450, 
Job=Teste_2.2020-12-21_13.58.13_58
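
For context, my understanding is that the Device value in a Director Storage 
resource must name a Device (or Autochanger) resource that is actually defined 
in bacula-sd.conf, and the Media Type must match on both sides; a minimal 
sketch with hypothetical names:

# bacula-dir.conf (hypothetical names)
Storage {
  Name = "File1"
  Address = "sd.example.com"
  SdPort = 9103
  Password = "sd-password"
  Device = "LinuxEnv-Dev1"      # must match a Device/Autochanger Name in bacula-sd.conf
  MediaType = "File1"           # must match that device's Media Type
}

# bacula-sd.conf
Device {
  Name = LinuxEnv-Dev1
  Media Type = File1
  Archive Device = /tmp
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}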



Here's my sd config:

Storage { # definition of myself
  Name = linuxenvtest2-sd
  SDPort = 9103  # Director's port
  WorkingDirectory = "/var/lib/bacula"
  Pid Directory = "/run/bacula"
  Plugin Directory = "/usr/lib/bacula"
  Maximum Concurrent Jobs = 20
  #SDAddress = 127.0.0.1
}

Director {
  Name = bacula-dir
  Password = "myPassword"
}

Director {
  Name = linuxenvtest2-mon
  Password = "myPassword"
  Monitor = yes
}

#Autochanger {
#  Name = LinuxEnv
#  Device = LinuxEnv-Dev1, LinuxEnv-Dev2
#  Changer Command = ""
#  Changer Device = /dev/sg0
#}

Device {
  Name = LinuxEnv-Dev1
  Media Type = File1
  Archive Device = /tmp
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

Device {
  Name = LinuxEnv-Dev2
  Media Type = File1
  Archive Device = /tmp
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}
Messages {
  Name = Standard
  director = linuxenvtest2-dir = all
}
Here are my dir configs:

Director {
  Name = "bacula-dir"
  Messages = "Daemon"
  QueryFile = "/etc/bacula/scripts/query.sql"
  WorkingDirectory = "/etc/bacula/working"
  PidDirectory = "/var/run"
  MaximumConcurrentJobs = 20
  Password = "myPassword"
}
Client {
  Name = "bacula-fd"
  Address = "myIP"
  FdPort = 9102
  Password = "bCnK3Z9AxYO9Q0MiXhxcgz930rA67sqUAti+cVhcTUTv"
  Catalog = "MyCatalog"
  FileRetention = 5184000
  JobRetention = 15552000
  AutoPrune = yes
}
Client {
  Name = "linuxenvtest2-fd"
  Address = "myIP"
  Password = "myPassword"
  Catalog = "MyCatalog"
  MaximumConcurrentJobs = 10
}
Client {
  Name = "netbrapc22-fd"
  Address = "myIP"
  Password = "myPassword"
  Catalog = "MyCatalog"
}
Job {
  Name = "RestoreFiles"
  Type = "Restore"
  Messages = "Standard"
  Storage = "File1"
  Pool = "File"
  Client = "bacula-fd"
  Fileset = "Full Set"
  Where = "/tmp/bacula-restores"
}
Job {
  Name = "Teste 2"
  Type = "Backup"
  Level = "Incremental"
  Messages = "Standard"
  Storage = "File1"
  Pool = "Incremental - Local"
  NextPool = "Virtual Full - Remoto"
  Client = "linuxenvtest2-fd"
  Fileset = "myFiles - Linux"
  MaximumConcurrentJobs = 10
  BackupsToKeep = 2
  DeleteConsolidatedJobs = yes
}
Storage {
  Name = "File1"
  SdPort = 9103
  Address = "myIP"
  Password = "myPassword"
  Device = "ubuntu2004-Dev1"
  MediaType = "File1"
  Autochanger = "File1"
  MaximumConcurrentJobs = 10
}
Storage {
  Name = "File2"
  SdPort = 9103
  Address = "myIP"
  Password = "myPassword"
  Device = "ubuntu2004-Dev2"
  MediaType = "File1"
  Autochanger = "File2"
  MaximumConcurrentJobs = 10
}
Storage {
  Name = "linuxenvtest2-sd"
  Address = "myIP"
  Password = "myPassword"
  Device = "LinuxEnv-Dev2"
  MediaType = "File1"
  Autochanger = "linuxenvtest2-sd"
  MaximumConcurrentJobs = 10
  MaximumConcurrentReadjobs = 10
}
Catalog {
  Name = "MyCatalog"
  Password = 

Re: [Bacula-users] Data loss with Hard Links

2020-12-21 Thread Martin Simmons
> On Fri, 18 Dec 2020 16:57:56 +0100, Eric Bollengier via Bacula-users said:
> 
> Hello,
> 
> On 2020-12-18 15:16, Andrea Venturoli wrote:
> > On 12/15/20 6:49 PM, Martin Simmons wrote:
> > 
> >> What is the fileset definition?
> > 
> > Quite huge, both in terms of size (a full backup is around 400GB) and in
> > terms of lines (more than 40 ZFS datasets).
> > I'm pasting here the relevant parts:
> > 
> > FileSet {
> >   Name="Xx"
> >   Include {
> >     Options {
> >   signature = MD5
> >   Accurate=yes
> 
> Here you have a mistake (maybe not a big one) in your fileset: the Accurate
> directive you want to use belongs in the Job definition. At the FileSet
> Options level, "accurate" instead controls which file attributes are used
> to decide whether a file should be backed up or not. It is something like
> "pins" by default ("yes" or "no" have no meaning there). The new 11.0
> version should validate this field and report an error.

Haha, nice one!


> You should probably also
> have a directive like hardlinks=yes; otherwise they are not handled.

That gave me a moment of worry about my backups, but luckily hardlinks
defaults to yes.
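
For reference, a minimal sketch of where the two settings live, based on 
Eric's explanation (resource names and the path are only examples):

# bacula-dir.conf -- Accurate mode is enabled per Job
Job {
  Name = "backup-xx"            # example name
  Accurate = yes                # this is the directive Eric means
  ...
}

# FileSet Options: "accurate" here selects which attributes decide whether
# a file changed (e.g. "pins"); hardlinks defaults to yes, as noted above.
FileSet {
  Name = "Xx"
  Include {
    Options {
      signature = MD5
      accurate = pins
      hardlinks = yes
    }
    File = /some/path           # example
  }
}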

__Martin




Re: [Bacula-users] deb and rpm Bacula 9.6.7 ????

2020-12-21 Thread Davide Franco
Hi,

Bacula 9.6.7 packages will be available later today.

Best regards

On Sun, 20 Dec 2020 at 14:45, Jose Alberto  wrote:

> Hi.
> Will they update the binaries soon?
>
> https://www.bacula.org/packages/
>
> Saludos.
>
>
> --
> #
> #   Sistema Operativo: Debian  #
> #Caracas, Venezuela  #
> #