Re: [Bacula-users] backing up cifs-shares

2024-02-27 Thread Stefan G. Weichinger

On 27.02.24 at 12:29, Marcin Haba wrote:

Hello Stefan,

After adding www-data to the bacula group, you need to restart the
php-fpm and web server services.

Here you can find more information about possible ways to solve this error:

https://bacularis.app/doc/brief/troubleshooting.html#permission-denied-error-when-saving-bacula-configuration


"chmod 755 /opt/bacula/etc" was missing. Works now, thanks!
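For the record, the combined fix (group membership plus directory mode) can be sketched like this; the exact service names (php-fpm version, web server) are assumptions and may differ on your system:

```shell
# add the web server user to the bacula group
usermod -aG bacula www-data

# make /opt/bacula/etc traversable so group members can open files in it
chmod 755 /opt/bacula/etc

# restart so php-fpm and the web server pick up the new group membership
systemctl restart php8.2-fpm nginx
```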



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] backing up cifs-shares

2024-02-27 Thread Marcin Haba
Hello Stefan,

After adding www-data to the bacula group, you need to restart the
php-fpm and web server services.

Here you can find more information about possible ways to solve this error:

https://bacularis.app/doc/brief/troubleshooting.html#permission-denied-error-when-saving-bacula-configuration

Good luck!

Best regards,
Marcin Haba (gani)

On Tue, 27 Feb 2024 at 12:20, Stefan G. Weichinger  wrote:
>
>
> It seems to have worked now ... for the first windows-client.
>
> I am in the process of removing the older release and baculum.
>
> Earlier config was in "/etc/bacula", now the path seems to be
> "/opt/bacula/etc"
>
> I moved the configs, adjusted paths ... also in the API Panel.
>
> Things are *read* ok, but now I can't edit configs -> permission errors
>
> -
>
> /opt/bacula# ls -l etc/
total 28
> -rw-rw-r-- 1 bacula bacula 7434 27. Feb 12:11 bacula-dir.conf
> -rw-rw-r-- 1 bacula bacula  494 25. Feb 17:36 bacula-fd.conf
> -rwxrw-rw- 1 bacula bacula 1136 24. Feb 17:21 bacula-fd.conf.dist
> -rw-rw-r-- 1 bacula bacula  952 26. Feb 09:18 bacula-sd.conf
> -rw-rw-r-- 1 bacula bacula  270 25. Feb 17:36 bconsole.conf
> -rwxrw-rw- 1 bacula bacula  265 24. Feb 17:21 bconsole.conf.dist
>
> The sudo-conf:
>
> # cat /etc/sudoers.d/bacularis-api
> Defaults:www-data !requiretty
> www-data ALL = (root) NOPASSWD: /usr/bin/bconsole
>
> www-data ALL = (root) NOPASSWD: /opt/bacula/bin/bconsole
>
> www-data ALL = (root) NOPASSWD: /opt/bacula/scripts/mtx-changer
>
> www-data ALL = (root) NOPASSWD: /opt/bacula/bin/bdirjson
> www-data ALL = (root) NOPASSWD: /opt/bacula/bin/bsdjson
> www-data ALL = (root) NOPASSWD: /opt/bacula/bin/bfdjson
> www-data ALL = (root) NOPASSWD: /opt/bacula/bin/bbconsjson
>
> I added "www-data" to the group "bacula" ... (Debian 12.5 here)
>
>
> When I edit a Fileset and press "Save" ->
>
>   Error Error 1000: Internal error. [Warning]
> file_put_contents(/opt/bacula/etc/bacula-dir.conf): Failed to open
> stream: Permission denied (@line 64 in file
> /usr/share/bacularis/protected/vendor/bacularis/bacularis-common/Common/Modules/ConfigBacula.php).
>
> oh my
>
>
>



-- 
"Greater love hath no man than this, that a man lay down his life for
his friends." Jesus Christ





Re: [Bacula-users] backing up cifs-shares

2024-02-27 Thread Stefan G. Weichinger



It seems to have worked now ... for the first windows-client.

I am in the process of removing the older release and baculum.

Earlier config was in "/etc/bacula", now the path seems to be 
"/opt/bacula/etc"


I moved the configs, adjusted paths ... also in the API Panel.

Things are *read* ok, but now I can't edit configs -> permission errors

-

/opt/bacula# ls -l etc/
total 28
-rw-rw-r-- 1 bacula bacula 7434 27. Feb 12:11 bacula-dir.conf
-rw-rw-r-- 1 bacula bacula  494 25. Feb 17:36 bacula-fd.conf
-rwxrw-rw- 1 bacula bacula 1136 24. Feb 17:21 bacula-fd.conf.dist
-rw-rw-r-- 1 bacula bacula  952 26. Feb 09:18 bacula-sd.conf
-rw-rw-r-- 1 bacula bacula  270 25. Feb 17:36 bconsole.conf
-rwxrw-rw- 1 bacula bacula  265 24. Feb 17:21 bconsole.conf.dist

The sudo-conf:

# cat /etc/sudoers.d/bacularis-api
Defaults:www-data !requiretty
www-data ALL = (root) NOPASSWD: /usr/bin/bconsole

www-data ALL = (root) NOPASSWD: /opt/bacula/bin/bconsole

www-data ALL = (root) NOPASSWD: /opt/bacula/scripts/mtx-changer

www-data ALL = (root) NOPASSWD: /opt/bacula/bin/bdirjson
www-data ALL = (root) NOPASSWD: /opt/bacula/bin/bsdjson
www-data ALL = (root) NOPASSWD: /opt/bacula/bin/bfdjson
www-data ALL = (root) NOPASSWD: /opt/bacula/bin/bbconsjson

I added "www-data" to the group "bacula" ... (Debian 12.5 here)


When I edit a Fileset and press "Save" ->

 Error Error 1000: Internal error. [Warning] 
file_put_contents(/opt/bacula/etc/bacula-dir.conf): Failed to open 
stream: Permission denied (@line 64 in file 
/usr/share/bacularis/protected/vendor/bacularis/bacularis-common/Common/Modules/ConfigBacula.php).


oh my





Re: [Bacula-users] backing up cifs-shares

2024-02-26 Thread Rob Gerber
In bacularis, when you go to Volumes, does the tape show as "inchanger=
yes"? If not, you need to do a scan of the changer inventory slots
(possible there in bacularis on the Volumes page). I forget what the button
says, but it's at the top of the Volumes page. I select our changer and
specify to scan slots 1-24. You'd probably select your changer and type 1-8
for which slots to scan, since IIRC you have an 8-bay changer.
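If you prefer the console, the equivalent of that button is bconsole's "update slots" command; a sketch, with the storage name taken from the config posted elsewhere in this thread and the drive number assumed:

```shell
# re-read the changer's barcode inventory and update the catalog;
# add "scan" after "slots" to mount and read each tape's label
# instead of trusting barcodes
echo "update slots storage=loader1 drive=0" | bconsole
```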

Also in bacularis, what does the job itself say after running? Does the job
hang and wait forever, saying "please load a tape" (I am paraphrasing the
message it will give)? Or does the job fail automatically and end the
running state?



Robert Gerber
402-237-8692
r...@craeon.net

On Mon, Feb 26, 2024, 11:48 AM Stefan G. Weichinger  wrote:

> On 26.02.24 at 14:25, Rob Gerber wrote:
> > Mixing topics is ok. Better in this case. All the information is in one
> > email thread.
>
> fine
>
> > I have read that if you use mt to rewind the tape and then write eof to
> > the tape (end of file), it will fool bacula into thinking that the tape
> > is empty. If you want to wipe the bacula catalog in your postgres
> > server, there should be scripts that do this in /opt/bacula/scripts
> > (default location, maybe different on debian).
> >
> > The mt command is something like
> > mt -f /dev/yourtapedrive rewind
> > mt -f /dev/yourtapedrive weof
> > Check the command syntax; I only vaguely remember it.
>
> Used that already before, yes, thanks
>
> > If you are not clearing the bacula database, you will probably need to
> > purge volumes from the catalog (or maybe delete them). Remember, these
> > commands are dangerous and have sharp edges. After using the mt command
> > you may need to relabel the tape.
>
> currently a job seems to hang:
>
> 26-Feb 18:33 debian1-sd JobId 34: Warning: mount.c:216 Open of Tape
> device "HP-Ultrium4" (/dev/nst0) Volume "Vol07" failed:
> ERR=tape_dev.c:170 Unable to open device "HP-Ultrium4" (/dev/nst0):
> ERR=Kein Medium gefunden (no medium found)
>
> 26-Feb 18:43 debian1-sd JobId 34: Warning: mount.c:216 Open of Tape
> device "HP-Ultrium4" (/dev/nst0) Volume "Vol07" failed:
> ERR=tape_dev.c:170 Unable to open device "HP-Ultrium4" (/dev/nst0):
> ERR=Kein Medium gefunden (no medium found)
>
> I don't know why. Tests with "btape" are completely fine.
>
> That Volume seems not to exist on tape but somewhere in the DB or so.
>
> I would expect the software to skip that and look for another usable tape.
>
> Is there a check routine possible *before* starting the job?
>
> It dumps ~120GB data into the spooling area, then detects that the tape
> isn't there and hangs ... when I stop the job, the spooled data is deleted.
>
> How can I fix that? thanks!
>
>


Re: [Bacula-users] backing up cifs-shares

2024-02-26 Thread Stefan G. Weichinger

On 26.02.24 at 14:25, Rob Gerber wrote:
Mixing topics is ok. Better in this case. All the information is in one 
email thread.


fine

I have read that if you use mt to rewind the tape and then write eof to 
the tape (end of file), it will fool bacula into thinking that the tape 
is empty. If you want to wipe the bacula catalog in your postgres 
server, there should be scripts that do this in /opt/bacula/scripts 
(default location, maybe different on debian).


The mt command is something like
mt -f /dev/yourtapedrive rewind
mt -f /dev/yourtapedrive weof
Check the command syntax; I only vaguely remember it.


Used that already before, yes, thanks

If you are not clearing the bacula database, you will probably need to
purge volumes from the catalog (or maybe delete them). Remember, these
commands are dangerous and have sharp edges. After using the mt command
you may need to relabel the tape.


currently a job seems to hang:

26-Feb 18:33 debian1-sd JobId 34: Warning: mount.c:216 Open of Tape 
device "HP-Ultrium4" (/dev/nst0) Volume "Vol07" failed: 
ERR=tape_dev.c:170 Unable to open device "HP-Ultrium4" (/dev/nst0): 
ERR=Kein Medium gefunden (no medium found)


26-Feb 18:43 debian1-sd JobId 34: Warning: mount.c:216 Open of Tape 
device "HP-Ultrium4" (/dev/nst0) Volume "Vol07" failed: 
ERR=tape_dev.c:170 Unable to open device "HP-Ultrium4" (/dev/nst0): 
ERR=Kein Medium gefunden (no medium found)


I don't know why. Tests with "btape" are completely fine.

That Volume seems not to exist on tape but somewhere in the DB or so.

I would expect the software to skip that and look for another usable tape.

Is there a check routine possible *before* starting the job?

It dumps ~120GB data into the spooling area, then detects that the tape 
isn't there and hangs ... when I stop the job, the spooled data is deleted.


How can I fix that? thanks!





Re: [Bacula-users] backing up cifs-shares

2024-02-26 Thread Rob Gerber
Mixing topics is ok. Better in this case. All the information is in one
email thread.

26-Feb 12:41 debian1-sd JobId 31: Warning: For Volume "08L4":
The number of files mismatch! Volume=4 Catalog=3
Correcting Catalog
This message is fairly normal if you restore your catalog backup and lose
some information about backups that exist. There is also catalog
information encoded on the volume with each backup.

26-Feb 12:41 debian1-sd JobId 31: Error: Backspace record at EOT failed.
I have not seen this error before, but I am not extremely experienced with
Bacula. I suspect your theory is correct.

Warning: destructive / dangerous process ahead.

I have read that if you use mt to rewind the tape and then write eof to the
tape (end of file), it will fool bacula into thinking that the tape is
empty. If you want to wipe the bacula catalog in your postgres server,
there should be scripts that do this in /opt/bacula/scripts (default
location, maybe different on debian).

The mt command is something like
mt -f /dev/yourtapedrive rewind
mt -f /dev/yourtapedrive weof
Check the command syntax; I only vaguely remember it.

If you are not clearing the bacula database, you will probably need to
purge volumes from the catalog (or maybe delete them). Remember, these
commands are dangerous and have sharp edges. After using the mt command
you may need to relabel the tape.
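A sketch of that destructive sequence with concrete paths from this thread (/dev/nst0; the volume name is a placeholder); `weof` is the usual spelling of the write-EOF operation, and the bconsole label syntax is from memory, so verify before use:

```shell
mt -f /dev/nst0 rewind   # go back to the beginning of the tape
mt -f /dev/nst0 weof     # write a file mark: Bacula now sees the tape as empty

# then relabel from bconsole (verify the exact syntax on your version):
echo "label volume=Vol08 storage=loader1 slot=2 drive=0" | bconsole
```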

Robert Gerber
402-237-8692
r...@craeon.net

On Mon, Feb 26, 2024, 5:49 AM Stefan G. Weichinger  wrote:

> On 24.02.24 at 20:53, Stefan G. Weichinger wrote:
>
> > I was already able to get the windows-client to work by
> > upgrading/reinstalling the bacula-server part (basically starting from
> > scratch ... didn't matter much, but was a bit of work).
> >
> > The server was older than the client, which seems to have led to the
> > mentioned error.
> >
> > I also took the chance to get rid of baculum and install bacularis.
>
> The windows-client part seems to work now, but by "losing" my
> postgres-DB/catalog in the process of upgrading/reinstalling I have
> issues with the tapes/volumes.
>
> Is there a fast way to start from scratch? I re-labelled volumes,
> updated slots, edited tapes to status "Append" etc, but Bacula seems
> confused by the tapes ;-)
>
> Stuff like this right now:
>
>
>
> 26-Feb 12:39 debian1-sd JobId 31: Volume "08L4" previously written,
> moving to end of data.
> 26-Feb 12:41 debian1-sd JobId 31: Warning: For Volume "08L4":
> The number of files mismatch! Volume=4 Catalog=3
> Correcting Catalog
> 26-Feb 12:41 debian1-sd JobId 31: New volume "08L4" mounted on
> device "HP-Ultrium4" (/dev/nst0) at 26-Feb-2024 12:41.
> 26-Feb 12:41 debian1-sd JobId 31: Error: Backspace record at EOT failed.
> ERR=Eingabe-/Ausgabefehler (input/output error)
> 26-Feb 12:41 debian1-sd JobId 31: End of medium on Volume "08L4"
> Bytes=53,687,079,936 Blocks=0 at 26-Feb-2024 12:41.
> 26-Feb 12:41 debian1-sd JobId 31: 3307 Issuing autochanger "unload
> Volume 08L4, Slot 2, Drive 0" command.
> 26-Feb 12:45 debian1-sd JobId 31: 3304 Issuing autochanger "load Volume
> 09L4, Slot 3, Drive 0" command.
> 26-Feb 12:46 debian1-sd JobId 31: 3305 Autochanger "load Volume
> 09L4, Slot 3, Drive 0", status is OK.
> 26-Feb 12:46 debian1-sd JobId 31: Volume "09L4" previously written,
> moving to end of data.
>
> I have to add that I edited MaximumFileSize from 20GB to 2GB after
> reading the info. Maybe the tapes I wrote with the larger value aren't
> "compatible" now anymore?
>
> Sorry for mixing topics and asking FAQs ... still learning many details.
>
>
>


Re: [Bacula-users] backing up cifs-shares

2024-02-26 Thread Stefan G. Weichinger

On 24.02.24 at 20:53, Stefan G. Weichinger wrote:

I was already able to get the windows-client to work by 
upgrading/reinstalling the bacula-server part (basically starting from 
scratch ... didn't matter much, but was a bit of work).


The server was older than the client, which seems to have led to the 
mentioned error.


I also took the chance to get rid of baculum and install bacularis.


The windows-client part seems to work now, but by "losing" my 
postgres-DB/catalog in the process of upgrading/reinstalling I have 
issues with the tapes/volumes.


Is there a fast way to start from scratch? I re-labelled volumes, 
updated slots, edited tapes to status "Append" etc, but Bacula seems 
confused by the tapes ;-)


Stuff like this right now:



26-Feb 12:39 debian1-sd JobId 31: Volume "08L4" previously written, 
moving to end of data.

26-Feb 12:41 debian1-sd JobId 31: Warning: For Volume "08L4":
The number of files mismatch! Volume=4 Catalog=3
Correcting Catalog
26-Feb 12:41 debian1-sd JobId 31: New volume "08L4" mounted on 
device "HP-Ultrium4" (/dev/nst0) at 26-Feb-2024 12:41.
26-Feb 12:41 debian1-sd JobId 31: Error: Backspace record at EOT failed. 
ERR=Eingabe-/Ausgabefehler (input/output error)
26-Feb 12:41 debian1-sd JobId 31: End of medium on Volume "08L4" 
Bytes=53,687,079,936 Blocks=0 at 26-Feb-2024 12:41.
26-Feb 12:41 debian1-sd JobId 31: 3307 Issuing autochanger "unload 
Volume 08L4, Slot 2, Drive 0" command.
26-Feb 12:45 debian1-sd JobId 31: 3304 Issuing autochanger "load Volume 
09L4, Slot 3, Drive 0" command.
26-Feb 12:46 debian1-sd JobId 31: 3305 Autochanger "load Volume 
09L4, Slot 3, Drive 0", status is OK.
26-Feb 12:46 debian1-sd JobId 31: Volume "09L4" previously written, 
moving to end of data.


I have to add that I edited MaximumFileSize from 20GB to 2GB after 
reading the info. Maybe the tapes I wrote with the larger value aren't 
"compatible" now anymore?
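For reference, Maximum File Size is a directive of the tape Device resource in bacula-sd.conf; the edit described above would look roughly like this sketch (only the relevant directive shown, all other settings omitted):

```conf
Device {
  Name = "HP-Ultrium4"
  Archive Device = /dev/nst0
  # reduced from 20GB; controls how often file marks are written on tape
  Maximum File Size = 2GB
}
```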


Sorry for mixing topics and asking FAQs ... still learning many details.





Re: [Bacula-users] backing up cifs-shares

2024-02-24 Thread Stefan G. Weichinger

On 24.02.24 at 03:07, Rob Gerber wrote:

Hello Stefan!

I have a server that I back up via bacula that I can only access via 
SMB. It is a high-end NAS appliance running gentoo linux, with 
absolutely no shell access by anyone but the vendor who provides it. The 
appliance does its job well, but I have no ability to run an FD on it, 
nor should I since it is a highly tuned and customized environment. As 
such, I back it up via SMB to LTO-8. This works fairly well.


[..]

Rob, thanks for that long and detailed reply. It will take me some time 
to work through it and understand all the details.


I was already able to get the windows-client to work by 
upgrading/reinstalling the bacula-server part (basically starting from 
scratch ... didn't matter much, but was a bit of work).


The server was older than the client, which seems to have led to the 
mentioned error.


I also took the chance to get rid of baculum and install bacularis.

Looks quite good already, the first job is running.

thanks so far!






Re: [Bacula-users] backing up cifs-shares

2024-02-23 Thread Rob Gerber
Hello Stefan!

I have a server that I back up via bacula that I can only access via SMB.
It is a high-end NAS appliance running gentoo linux, with absolutely no
shell access by anyone but the vendor who provides it. The appliance does
its job well, but I have no ability to run an FD on it, nor should I since
it is a highly tuned and customized environment. As such, I back it up via
SMB to LTO-8. This works fairly well.

I will emphasize that best practice as I understand it is to use a local
bacula FD whenever possible. So the windows FD you located may be a good
choice. However, you'll have to experiment with it as I have no experience
with windows FD software.

I have not found any issue backing up multiple shares in one job/fileset
definition. Ultimately I found that all shares in one job was best for my
case. Keep in mind for ongoing backups that if some data is static and
doesn't change much while other data changes very frequently, you might
want to back those datasets up in different jobs, so the large static body
of data isn't written to tape again and again as part of full jobs. Instead
you could run rare incrementals against the large static data, and frequent
fulls, differentials, and incrementals against the frequently changing
data. This might not apply to your case, but I believe the reasons to split
share data among different jobs are primarily reasons other than Bacula
limitations. Bacula can back up the shares as part of a single fileset just
fine.

I did find that backing up the "large static dataset that doesn't change
often" separately from the "small dataset that changes often" was slightly
more inconvenient when doing general data restores, because I needed to
restore data from both jobs/filesets/pools, and the files from the small
active dataset weren't included in the large static dataset. I wound up
adding the shares for the small dataset to the fileset for the large
dataset, but didn't increase the backup frequency for the large dataset
backup job. This wouldn't be suitable for routine backups of the small
dataset, but because it is so small it really cost me nothing to back it up
twice: routinely and often in the job/fileset/pool for the small active
dataset, and occasionally for the large, static, mostly unchanging dataset.
A small tweak, but convenient.

Good job using a mountpoint script! Without that, bacula will probably
happily back up /mnt/s01/sharename even if unmounted, report "nothing to
back up! mission accomplished!" and exit 0. This sort of thing is one point
in favor of running a local windows FD (if they work well; I don't know
any details about them).

Regarding fileset elegance: for includes you could simply use a single
line: /mnt/bacula/s01/
After all, this folder contains all the mountpoints and should be
sufficient. Bacula will be backing up with the perspective that the file
paths all start with /mnt/bacula/s01/foo/bar so specifying individual
shares when every share in /mnt/bacula/s01/ is a backup target isn't really
necessary. This only gains you 2 lines. The excludes are much longer, and
I'm not sure there's a way to make them more elegant. You need to exclude
those items, after all.
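Applied to the fileset posted in the original message of this thread, that suggestion would look something like this sketch (the excludes are kept as they were):

```conf
Fileset {
  Name = "S01_Set1"
  Include {
    # one line covers every mounted share under this directory
    File = "/mnt/bacula/s01/"
    Options {
      Signature = "Md5"
    }
  }
  Exclude {
    # unchanged excludes from the original fileset
    File = "/mnt/bacula/s01/c_dollar/pagefile.sys"
    File = "/mnt/bacula/s01/c_dollar/Windows"
    # ... and the rest ...
  }
}
```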

You should be aware that even if not defined, File, Job, and Volume catalog
records all have default retention periods. If you back data up and don't
define those periods, a retention period will be enforced for you. If the
job records are pruned for a volume, the volume will be automatically
pruned as well. As such, be aware and define those retention periods as you
deem appropriate.

Practice restoring your bacula catalog now. In my general experience, the
restore process is fairly straightforward. You'll need to restore the file
from the backup job defined with the default install. It'll give you a
bacula.sql file. Assuming you're using PostgreSQL, you'll have to run
something like 'psql bacula < bacula.sql'. My command syntax or even the
command used could be inaccurate. VERIFY EVERYTHING, I'm only typing this
from memory. I do recall that the bacula.sql file appeared to contain
everything needed to drop the existing postgres tables, create new ones,
and import all the relevant data.
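A hedged sketch of that restore path, assuming PostgreSQL and the default database name; stopping the director first is an added precaution (not part of Rob's description), and the service name may differ per distribution:

```shell
systemctl stop bacula-director      # keep the catalog quiet during the import

# the restored bacula.sql is expected to drop and recreate the tables itself
psql bacula < bacula.sql

systemctl start bacula-director
```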

Know that the bconsole purge and delete commands are DANGEROUS. They tell
you that; what they don't tell you is that there isn't much in the way of
confirmation before they go ahead and delete all your records after a
poorly formatted / misunderstood command entry. The level of care given to
the design of restores and backups wasn't applied when designing the purge
command, at minimum. I expected some level of confirmation before it did
its business, but two levels in it happily announced "ok! I deleted all the
records associated with the FD you selected!" I was shocked. I was also
unharmed in the end, because I knew how to restore my frequently backed up
bacula database.

The purge command is dangerous not just because of what it does (remove
catalog 

Re: [Bacula-users] backing up cifs-shares

2024-02-23 Thread Stefan G. Weichinger

On 23.02.24 at 09:50, Stefan G. Weichinger wrote:


I am still learning my way around bacula and could use some explanations.


In the meantime I learned there is a Windows client ;-)


installed it, added client definition to the server, edited client conf ...

I see the client and its status on the server

When I start a job I get this ->



  Non-fatal FD errors:    1
  SD Errors:              0
  FD termination status:  Error
  SD termination status:  Waiting on FD
  Termination:            *** Backup Error ***
debian1-dir JobId 140: Fatal error: No Job status returned from FD.
debian1-dir JobId 140: Fatal error: Bad response to Storage command: 
wanted 2000 OK storage
, got 2800 End Job TermCode=102 JobFiles=0 ReadBytes=0 JobBytes=0 
Errors=1 VSS=1 Encrypt=0 CommBytes=79 CompressCommBytes=79
s01-fd JobId 140: Fatal error: filed/hello.c:191 Bad caps from SD: auth 
cram-md5 <133970927.1708705086@debian1-sd> ssl=0

.
debian1-dir JobId 140: Using Device "HP-Ultrium4" to write.
debian1-dir JobId 140: Start Backup JobId 140, 
Job=s01-windows-client.2024-02-23_17.18.03_47



Is that a mismatch? Server not using SSL, client trying to?

tia




[Bacula-users] backing up cifs-shares

2024-02-23 Thread Stefan G. Weichinger



I am still learning my way around bacula and could use some explanations.

One goal of my customer is to back up an old Windows Server VM with ~15 
shares.


My bacula-server is a Debian-VM with bacula-13.0.3, and baculum-11.0.6

I have a config running, writing to a HP changer with 8 tapes etc

My current approach:

I have a JobDef for that server, with pre/post-scripts to mount the 
share "C$" (kind of a catchall-approach for the start):



JobDefs {
  Name = "Server_S01"
  Type = "Backup"
  Level = "Incremental"
  Messages = "Standard"
  Storage = "loader1"
  Pool = "Default"
  Client = "debian1-fd"
  Fileset = "S01_Set1"
  Schedule = "WeeklyCycle"
  WriteBootstrap = "/var/lib/bacula/%c.bsr"
  SpoolAttributes = yes
  Runscript {
RunsWhen = "Before"
RunsOnClient = no
Command = "/etc/bacula/scripts/cifs_mount_s01.sh"
  }
  Runscript {
RunsWhen = "After"
RunsOnClient = no
Command = "/usr/bin/umount /mnt/bacula/s01/c_dollar"
  }
  Priority = 10
}

# fstab

//192.168.0.11/C$  /mnt/bacula/s01/c_dollar cifs 
ro,_netdev,users,noauto,credentials=/var/lib/bacula/.smbcreds_s01 0 0


# scripts/cifs_mount_s01.sh

/usr/bin/mountpoint -q /mnt/bacula/s01/c_dollar || /usr/sbin/mount.cifs 
//192.168.x.y/C$ /mnt/bacula/s01/c_dollar  -o 
credentials=/var/lib/bacula/.smbcreds_s01



A Fileset that doesn't look very elegant to me. I edited it for privacy 
... you get the picture:


Fileset {
  Name = "S01_Set1"
  Include {
File = "/mnt/bacula/s01/c_dollar/A"
File = "/mnt/bacula/s01/c_dollar/B"
File = "/mnt/bacula/s01/c_dollar/C"
Options {
  Signature = "Md5"
}
  }
  Exclude {
File = "/mnt/bacula/s01/c_dollar/Backu*"
File = "/mnt/bacula/s01/c_dollar/Dokumente*"
File = "/mnt/bacula/s01/c_dollar/pagefile.sys"
File = "/mnt/bacula/s01/c_dollar/Prog*"
File = "/mnt/bacula/s01/c_dollar/Reco*"
File = "/mnt/bacula/s01/c_dollar/System*"
File = "/mnt/bacula/s01/c_dollar/Windows"
File = "/mnt/bacula/s01/c_dollar/WSUS"
  }
}

--

Is that OK or is there a more elegant way to do this?

The Job runs right now, and copies files, OK

My CIFS-user should have admin rights, but for example I seem not to 
have read permissions when doing this:


# ls /mnt/bacula/s01/c_dollar/A

I let the job finish and check contents of backups later in the GUI.

Sure, that's more of a Samba/CIFS-question -> permissions of users. 
Maybe we should add my user to the various shares as a read-user via 
ACLs or so.


Being a member of admins seems not to be enough.

-

I'd appreciate suggestions how to backup multiple CIFS-shares.

One job per share? I would need pre/post-scripts for each of them?

thanks in advance! Stefan
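One sketch for the multiple-shares question above: a single pre-script can loop over share/mountpoint pairs, so one job with one RunScript covers them all. The share list below is illustrative, built on the mount script shown earlier; a matching After script would umount the same list.

```shell
#!/bin/sh
# Pre-script: mount every listed share that is not already mounted.
# Exit non-zero on the first failure so the job does not back up empty dirs.
set -e
while read -r share mp; do
    /usr/bin/mountpoint -q "$mp" || /usr/sbin/mount.cifs \
        "//192.168.x.y/$share" "$mp" \
        -o credentials=/var/lib/bacula/.smbcreds_s01
done <<'EOF'
C$ /mnt/bacula/s01/c_dollar
Data /mnt/bacula/s01/data
Public /mnt/bacula/s01/public
EOF
```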






