Re: [Bacula-users] backing up cifs-shares

2024-02-23 Thread Rob Gerber
Hello Stefan!

I have a server that I back up via bacula that I can only access via SMB.
It is a high-end NAS appliance running gentoo linux, with absolutely no
shell access by anyone but the vendor who provides it. The appliance does
its job well, but I have no ability to run an FD on it, nor should I since
it is a highly tuned and customized environment. As such, I back it up via
SMB to LTO-8. This works fairly well.

I will emphasize that best practice as I understand it is to use a local
bacula FD whenever possible. So the windows FD you located may be a good
choice. However, you'll have to experiment with it as I have no experience
with windows FD software.

I have not found any issue backing up multiple shares in one job/fileset
definition. Ultimately I found that putting all shares in one job was best
for my case. For ongoing backups, keep in mind that if some data is static
and rarely changes while other data changes frequently, you might want to
back those datasets up in separate jobs. That way the large static body of
data isn't rewritten to tape again and again as part of full jobs: you can
run rare incrementals against the static data, and frequent fulls,
differentials, and incrementals against the frequently changing data. That
might not apply to your case, but in my view the reasons to split share
data across different jobs are mostly practical ones like that, not bacula
limitations. Bacula can back up the shares as part of a single fileset just
fine.
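For example, a split could look something like this (just a sketch; the
fileset and schedule names below are made up, only "Server_S01" and
"WeeklyCycle" are taken from your config):

Job {
  Name = "s01-static"
  JobDefs = "Server_S01"
  Fileset = "S01_Static"        # the large, rarely-changing shares
  Schedule = "MonthlyCycle"     # hypothetical: rare fulls, occasional incrementals
}

Job {
  Name = "s01-active"
  JobDefs = "Server_S01"
  Fileset = "S01_Active"        # the small, frequently-changing shares
  Schedule = "WeeklyCycle"      # frequent fulls/diffs/incrementals
}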

I did find that backing up the "large static dataset that doesn't change
often" separately from the "small dataset that changes often" was slightly
more inconvenient for general restores, because I needed to restore data
from both jobs/filesets/pools; the files from the small active dataset
weren't included in the large static dataset. I wound up adding the shares
for the small dataset to the fileset for the large dataset, without
increasing the backup frequency of the large-dataset job. This wouldn't be
suitable as the routine backup of the small dataset, but because it is so
small it costs me essentially nothing to back it up twice: routinely and
often in the job/fileset/pool for the small active dataset, and
occasionally alongside the large, mostly static dataset. A small tweak, but
convenient.
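In config terms the tweak was roughly this (paths are made up, just to
illustrate the overlap):

Fileset {
  Name = "BigStaticSet"                  # hypothetical
  Include {
    File = "/mnt/nas/archive"            # large, rarely-changing shares
    File = "/mnt/nas/projects_active"    # small active shares, also present in
                                         # their own frequently-run fileset/job
    Options {
      Signature = "Md5"
    }
  }
}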

Good job using a mountpoint script! Without it, bacula will probably
happily back up /mnt/s01/sharename even if it's unmounted, report "nothing
to back up, mission accomplished!", and exit 0. This sort of thing is one
point in favor of running a local windows FD (if they work well - I don't
know any details about them).
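If you want the job to abort rather than "succeed" with nothing when the
mount isn't there, a variant of your pre-script like the following should
do it (a sketch from memory; if I remember right, a "RunsWhen = Before"
script that exits non-zero fails the job because Fail Job On Error defaults
to yes - verify that in the manual):

#!/bin/sh
# mount the share if it isn't already mounted
/usr/bin/mountpoint -q /mnt/bacula/s01/c_dollar || \
  /usr/sbin/mount.cifs //192.168.x.y/C$ /mnt/bacula/s01/c_dollar \
    -o credentials=/var/lib/bacula/.smbcreds_s01
# exit non-zero (and thereby fail the job) if the share still isn't mounted
/usr/bin/mountpoint -q /mnt/bacula/s01/c_dollar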

Regarding fileset elegance: for the includes you could simply use a single
line: /mnt/bacula/s01/
After all, that folder contains all the mountpoints and should be
sufficient. Bacula will back everything up with paths that start with
/mnt/bacula/s01/foo/bar, so specifying individual shares isn't really
necessary when every share in /mnt/bacula/s01/ is a backup target. This
only gains you 2 lines, though. The excludes are much longer, and I'm not
sure there's a way to make them more elegant. You need to exclude those
items, after all.
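A sketch of what I mean (from memory; note that, as far as I recall, you
would also need OneFS = no in the Options so bacula crosses from the local
filesystem into the cifs mounts under that directory - double-check that
directive before relying on it):

Include {
  File = "/mnt/bacula/s01"
  Options {
    Signature = "Md5"
    OneFS = no      # descend into the mounted shares below /mnt/bacula/s01
  }
}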

You should be aware that File, Job, and Volume catalog records all have
default retention periods even if you don't define them. If you back data
up without setting those periods, a retention period will be enforced for
you. If the job records for a volume are pruned, the volume will be
automatically pruned as well. So be aware of this and define those
retention periods as you deem appropriate.
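Roughly, the relevant directives look like this (the values are what I
remember the defaults being; verify against the manual):

Client {
  Name = "debian1-fd"          # rest of the resource as you already have it
  File Retention = 60 days
  Job Retention = 6 months
  AutoPrune = yes
}

Pool {
  Name = "Default"             # likewise, only the retention line is the point here
  Volume Retention = 1 year
}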

Practice restoring your bacula catalog now. In my experience the restore
process is fairly straightforward. You'll need to restore the file produced
by the catalog backup job defined in the default install; it gives you a
bacula.sql file. Assuming you're using PostgreSQL, you'll then run
something like 'psql bacula < bacula.sql'. My syntax or even the command
itself could be inaccurate - VERIFY EVERYTHING, I'm only typing this from
memory. I do recall that the bacula.sql file appeared to contain everything
needed to drop the existing postgres tables, create new ones, and import
all the relevant data.
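From memory, the whole exercise looks roughly like this (again, verify
everything; the paths, database user, and service names below are
assumptions on my part, not something I've checked against your setup):

# 1. restore bacula.sql via a normal bacula restore of the catalog backup job
# 2. stop the director so nothing writes to the catalog while you reload it
systemctl stop bacula-director
# 3. load the dump; it appeared to drop and recreate the tables itself
sudo -u postgres psql bacula < /path/to/restored/bacula.sql
# 4. start the director again
systemctl start bacula-director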

Know that the bconsole purge and delete commands are DANGEROUS. They tell
you that much; what they don't tell you is that there isn't much in the way
of confirmation before they go ahead and delete all your records in
response to a poorly formatted or misunderstood command. The care that went
into designing the restore and backup workflows wasn't applied to the purge
command, at minimum. I expected some level of confirmation before it did
its business, but two levels in it happily announced, in effect, "ok! I
deleted all the records associated with the FD you selected!" I was
shocked. I was also unharmed in the end, because I knew how to restore my
frequently backed-up bacula database.

The purge command is dangerous not just because of what it does (remove
catalog records regardless of retention periods), but because of how little
confirmation it requires before doing it.
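If the goal is just to reclaim space within your retention periods, prune
is the polite sibling of purge. From memory (verify the syntax with "help
prune" in bconsole), something like:

* prune files client=debian1-fd yes
* prune volume=Vol-0001 yes

(the volume name is made up). By contrast, "purge volume=Vol-0001" removes
the records regardless of retention, which is exactly the behavior
described above - avoid it unless you really mean it.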

Re: [Bacula-users] backing up cifs-shares

2024-02-23 Thread Stefan G. Weichinger

On 23.02.24 at 09:50, Stefan G. Weichinger wrote:


I am still learning my way around bacula and could use some explanations.


In the meantime I learned there is a Windows client ;-)


I installed it, added a client definition on the server, edited the client conf ...

I see the client and its status on the server

When I start a job I get this ->



  Non-fatal FD errors:    1
  SD Errors:              0
  FD termination status:  Error
  SD termination status:  Waiting on FD
  Termination:            *** Backup Error ***
debian1-dir JobId 140: Fatal error: No Job status returned from FD.
debian1-dir JobId 140: Fatal error: Bad response to Storage command: wanted 2000 OK storage, got 2800 End Job TermCode=102 JobFiles=0 ReadBytes=0 JobBytes=0 Errors=1 VSS=1 Encrypt=0 CommBytes=79 CompressCommBytes=79
s01-fd JobId 140: Fatal error: filed/hello.c:191 Bad caps from SD: auth cram-md5 <133970927.1708705086@debian1-sd> ssl=0

.
debian1-dir JobId 140: Using Device "HP-Ultrium4" to write.
debian1-dir JobId 140: Start Backup JobId 140, Job=s01-windows-client.2024-02-23_17.18.03_47



Is that a mismatch? Server not using SSL, client trying to?

tia




[Bacula-users] backing up cifs-shares

2024-02-23 Thread Stefan G. Weichinger



I am still learning my way around bacula and could use some explanations.

One goal of my customer is to back up an old Windows Server VM with ~15 shares.


My bacula server is a Debian VM with bacula-13.0.3 and baculum-11.0.6.

I have a config running, writing to an HP changer with 8 tapes, etc.

My current approach:

I have a JobDef for that server, with pre/post-scripts to mount the share "C$" (kind of a catch-all approach for a start):



JobDefs {
  Name = "Server_S01"
  Type = "Backup"
  Level = "Incremental"
  Messages = "Standard"
  Storage = "loader1"
  Pool = "Default"
  Client = "debian1-fd"
  Fileset = "S01_Set1"
  Schedule = "WeeklyCycle"
  WriteBootstrap = "/var/lib/bacula/%c.bsr"
  SpoolAttributes = yes
  Runscript {
RunsWhen = "Before"
RunsOnClient = no
Command = "/etc/bacula/scripts/cifs_mount_s01.sh"
  }
  Runscript {
RunsWhen = "After"
RunsOnClient = no
Command = "/usr/bin/umount /mnt/bacula/s01/c_dollar"
  }
  Priority = 10
}

# fstab

//192.168.0.11/C$  /mnt/bacula/s01/c_dollar  cifs  ro,_netdev,users,noauto,credentials=/var/lib/bacula/.smbcreds_s01  0 0


# scripts/cifs_mount_s01.sh

/usr/bin/mountpoint -q /mnt/bacula/s01/c_dollar || /usr/sbin/mount.cifs //192.168.x.y/C$ /mnt/bacula/s01/c_dollar -o credentials=/var/lib/bacula/.smbcreds_s01



A Fileset that doesn't look very elegant to me. I edited it for privacy ... you get the picture:


Fileset {
  Name = "S01_Set1"
  Include {
File = "/mnt/bacula/s01/c_dollar/A"
File = "/mnt/bacula/s01/c_dollar/B"
File = "/mnt/bacula/s01/c_dollar/C"
Options {
  Signature = "Md5"
}
  }
  Exclude {
File = "/mnt/bacula/s01/c_dollar/Backu*"
File = "/mnt/bacula/s01/c_dollar/Dokumente*"
File = "/mnt/bacula/s01/c_dollar/pagefile.sys"
File = "/mnt/bacula/s01/c_dollar/Prog*"
File = "/mnt/bacula/s01/c_dollar/Reco*"
File = "/mnt/bacula/s01/c_dollar/System*"
File = "/mnt/bacula/s01/c_dollar/Windows"
File = "/mnt/bacula/s01/c_dollar/WSUS"
  }
}

--

Is that OK or is there a more elegant way to do this?

The job is running right now and copying files, OK.

My CIFS user should have admin rights, but, for example, I don't seem to have read permissions when doing this:

# ls /mnt/bacula/s01/c_dollar/A

I'll let the job finish and check the contents of the backups later in the GUI.

Sure, that's more of a Samba/CIFS question -> user permissions. Maybe we should add my user to the various shares as a read-only user via ACLs or so.


Being a member of admins seems not to be enough.

-

I'd appreciate suggestions on how to back up multiple CIFS shares.

One job per share? Would I need pre/post-scripts for each of them?

thanks in advance! Stefan






