[Bacula-users] Bacula 9.0.6 with Windows client 7.4.4, status error

2018-01-30 Thread Peter Milesson

Hi folks,

I'm installing a Bacula server 9.0.6 under CentOS 7.4. I have also 
purchased a binary license for backing up a couple of Windows servers 
with the Bacula client. The purchased Windows client version is 7.4.4.


When I run bconsole, status, Network, and select the Windows client 
(192.168.0.205), the report is:


Connecting to Storage -Sd at 192.168.0.103:9103
Connecting to Client -Fd at 192.168.0.205:9102
Running network test between Client=-Fd and Storage=-Sd with 
52.42 MB ...

2999 Invalid command

I also have got Bacula 9.0.6 installed on a CentOS 7.4 server, and 
running the same network status works without errors. Also running 
network status on the local machine reports no errors.
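For reference, the session from bconsole looks roughly like this (a sketch; the daemon names are redacted placeholders, and `status network` is the direct form of the menu selection, if your console version supports it):

```
*status network
Connecting to Storage xxx-sd at 192.168.0.103:9103
Connecting to Client xxx-fd at 192.168.0.205:9102
Running network test between Client=xxx-fd and Storage=xxx-sd with 52.42 MB ...
2999 Invalid command
```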


I have made a successful test backup of the Windows client, without any 
error messages.


As it's a small network with just a few servers, I prefer to use IP 
addresses instead of names. Using names registered in the DNS server 
doesn't change anything; the behavior is identical. Also, the 
appropriate ports are open in the firewalls, and turning off the 
firewalls under Windows and CentOS, respectively, makes no difference.


Is the error message due to compatibility issues between the Windows 
7.4.4 Bacula client and the 9.0.6 server, or is there some problem that 
will jump up and bite me in the future?


Best regards,

Peter

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula 9.0.6 with Windows client 7.4.4, status error

2018-01-31 Thread Peter Milesson

Thanks Kern. Now I will sleep much better ;-)

Peter

On 2018-01-31 11:26, Kern Sibbald wrote:


The network status command does not exist on older File daemons, which 
is what the 2999 Invalid command means.


Yes, you are right.  New features don't work on older software.
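One way to confirm which FD version the Director is talking to, before relying on newer console features, is `status client` (a sketch; the client name is a placeholder, and the version string shown is the one reported later in this thread):

```
*status client=xxx-fd
...
xxx-fd Version: 7.4.4 (28Sep16) ...
```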

Best regards,

Kern


On 30.01.2018 12:08, Peter Milesson wrote:

Hi folks,

I'm installing a Bacula server 9.0.6 under CentOS 7.4. I have also 
purchased a binary license for backing up a couple of Windows servers 
with the Bacula client. The purchased Windows client version is 7.4.4.


When I run bconsole, status, Network, and select the Windows client 
(192.168.0.205), the report is:


Connecting to Storage -Sd at 192.168.0.103:9103
Connecting to Client -Fd at 192.168.0.205:9102
Running network test between Client=-Fd and Storage=-Sd with 
52.42 MB ...

2999 Invalid command

I also have got Bacula 9.0.6 installed on a CentOS 7.4 server, and 
running the same network status works without errors. Also running 
network status on the local machine reports no errors.


I have made a successful test backup of the Windows client, without 
any error messages.


As it's a small network with just a few servers, I prefer to use IP 
addresses instead of names. Using names registered in the DNS server 
doesn't change anything; the behavior is identical. Also, the 
appropriate ports are open in the firewalls, and turning off the 
firewalls under Windows and CentOS, respectively, makes no difference.


Is the error message due to compatibility issues between the Windows 
7.4.4 Bacula client and the 9.0.6 server, or is there some problem 
that will jump up and bite me in the future?


Best regards,

Peter









Re: [Bacula-users] Looking for a LTO autoloader

2018-11-13 Thread Peter Milesson

Hi Markus,

I know my suggestion doesn't concern autoloaders, but it's a safe and 
working alternative to tapes (and all their hassles).


I set up a dedicated backup station with 13TB of RAID10 storage. It's 
placed in a different location from the servers, and connected to the 
network backbone with a 10Gbit/s link. It's running CentOS 7.5, and 
utilizing the mhvtl virtual tape library. It's working very smoothly 
and reliably. The cost of building, running, and administering the rig 
compares quite favorably with tapes (been there, done that).
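For anyone curious, the Storage Daemon side of an mhvtl setup looks much like a real changer. A minimal bacula-sd.conf sketch, with assumed device paths and placeholder names (the sg/st device numbers and the mtx-changer path vary by system):

```
# Sketch of an mhvtl-backed autochanger (device paths are assumptions)
Autochanger {
  Name = "VTLChanger"
  Device = "VTLDrive-0"
  Changer Command = "/opt/bacula/scripts/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/sg10        # robot device exposed by mhvtl
}

Device {
  Name = "VTLDrive-0"
  Media Type = LTO-5
  Archive Device = /dev/st0        # virtual tape drive exposed by mhvtl
  Autochanger = yes
  AutomaticMount = yes
  RemovableMedia = yes
  RandomAccess = no
}
```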


Just my five cents.

Best regards,

Peter

On 13.11.2018 0:34, Markus Falb wrote:

Hello,
I am considering a purchase of a relatively small autoloader, but am 
not sure about compatibility with bacula.


I found this one:
https://www.overlandstorage.com/products/tape-libraries-and-autoloaders/neos-t24.aspx#Overview

If anyone on this list has experiences with this autoloader + bacula, 
please let me know.


I do know there is a "supported" list in the documentation
http://www.bacula.org/9.2.x-manuals/en/main/Supported_Autochangers.html
but I don't know how comprehensive it is. There are some Overland models 
listed, but not this specific model. (There isn't a "not 
supported" list, is there?)


Also, suggestions (or dis-recommendations) for other autoloaders are 
appreciated, of course.


Best Regards, Markus

P.S. the Qualstar Q24 looks pretty identical to the NEOs T24
https://www.qualstar.com/qsr-q24.php
Does anyone know what the differences are?





Re: [Bacula-users] Looking for a LTO autoloader

2018-11-13 Thread Peter Milesson


Hi Josh,

The link already exists, connecting two different locations within the 
company, so there is no extra cost there. If your company has two 
buildings physically connected, it's a nice and efficient solution.


I used tapes with Bacula for many years previously. It worked, 
but not very efficiently.


Best regards,

Peter


On 13.11.2018 15:47, Josh Fisher wrote:



On 11/13/2018 5:38 AM, Peter Milesson wrote:

Hi Markus,

I know my suggestion isn't concerning autoloaders, but it's a safe 
and working alternative to tapes (and all the hassles).


I setup a dedicated backup station with 13TB of RAID10 storage. It's 
placed in a different location from the servers, and connected to the 
network backbone with a 10Gbit/s link. It's running CentOS 7.5, and 
utilizing the mhvtl virtual tape library. It's working very smoothly, 
and reliably. The cost of building the rig, running, and 
administration is quite on the plus side compared to tapes (been 
there, done that).



I cannot see how the cost is lower. A 10G link to another location is 
extremely expensive in my area. I see the same problem with using 
cloud storage, S3, etc. The cost of the mhvtl rig or S3 storage is not 
the issue, but rather the ongoing cost of the bandwidth required to 
transfer the data in a timely manner. It may be different elsewhere, 
but in my area the bandwidth is far more expensive than tapes.





Just my five cents.

Best regards,

Peter

On 13.11.2018 0:34, Markus Falb wrote:

Hello,
I am considering a purchase of a relatively small autoloader, but am 
not sure about compatibility with bacula.


I found this one:
https://www.overlandstorage.com/products/tape-libraries-and-autoloaders/neos-t24.aspx#Overview

If anyone on this list has experiences with this autoloader + 
bacula, please let me know.


I do know there is a "supported" list in the documentation
http://www.bacula.org/9.2.x-manuals/en/main/Supported_Autochangers.html
but I don't know how comprehensive it is. There are some Overland 
models listed, but not this specific model. (There isn't a 
"not supported" list, is there?)


Also, suggestions (or dis-recommendations) for other autoloaders are 
appreciated, of course.


Best Regards, Markus

P.S. the Qualstar Q24 looks pretty identical to the NEOs T24
https://www.qualstar.com/qsr-q24.php
Does anyone know what the differences are?














Re: [Bacula-users] Looking for a LTO autoloader

2018-11-13 Thread Peter Milesson




On 13.11.2018 18:15, Dmitri Maziuk via Bacula-users wrote:

On Tue, 13 Nov 2018 09:47:32 -0500
Josh Fisher  wrote:


On 11/13/2018 5:38 AM, Peter Milesson wrote:

Hi Markus,

I know my suggestion isn't concerning autoloaders, but it's a safe and
working alternative to tapes (and all the hassles).

I setup a dedicated backup station with 13TB of RAID10 storage. It's
placed in a different location from the servers, and connected to the
network backbone with a 10Gbit/s link. It's running CentOS 7.5, and
utilizing the mhvtl virtual tape library. It's working very smoothly,
and reliably. The cost of building the rig, running, and
administration is quite on the plus side compared to tapes (been
there, done that).


I cannot see how the cost is lower. A 10G link to another location is
extremely expensive in my area. I see the same problem with using cloud
storage, S3, etc. The cost of the mhvtl rig or S3 storage is not the
issue, but rather the ongoing cost of the bandwidth required to transfer
the data in a timely manner. It may be different elsewhere, but in my
area the bandwidth is far more expensive than tapes.

Do you need a 10G link though? -- For the amounts I'm backing up here 1G works 
fine.

The other issue is RAID-10 and mhvtl: if you do disk backup, why go through the 
extra layer of mhvtl? And why not use ZFS? I just finished replacing 4TB drives 
with 8TB ones in our server, with no downtime.


Hi Dmitri,

Using a standard 1G link posed a bottleneck during the monthly full 
backup session of several servers. 10GbE NICs are readily available and 
not very expensive. Measuring the throughput, it's around 2.5 Gbit/s 
when backing up Linux servers. For some reason Windows servers seem to 
be much slower (OS overhead?). Most of the data is on Linux servers, so 
it really paid off.
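To separate raw link capacity from backup throughput, a quick test with iperf3 is useful (a sketch; the hostname is a placeholder, and iperf3 must be installed on both ends):

```
# on the backup server
iperf3 -s

# on the client being backed up; reports achievable TCP throughput
iperf3 -c backup-server -t 30
```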


The backup station is a Linux-based rig with CentOS 7.5. I'm not 
completely convinced about the ZFS support under CentOS, and I don't 
want to experiment too much with valuable data. The mhvtl is just to 
keep compatibility with tapes. I'm familiar with that, and more or less 
copied the configuration when I switched hardware. It works, and works 
well (don't touch something that works).


Best regards,

Peter




Re: [Bacula-users] Looking for a LTO autoloader

2018-11-14 Thread Peter Milesson



On 14.11.2018 8:37, Radosław Korzeniewski wrote:

Hello,

On Tue, 13 Nov 2018 at 18:20, Dmitri Maziuk via Bacula-users wrote:



The other issue is RAID-10 and mhvtl: if you do disk backup, why
go through the extra layer of mhvtl?


Peter did not provide the answer to this question (or I missed 
something), but the only reasonable setup would be mhvtl exporting 
its resources as an iSCSI/FC tape autochanger, with the local Storage 
Daemon using it as a "real" tape library. In that case the local SD 
stores data to the remote mhvtl, so the data is protected at the remote 
site. Any other solution (especially an SD deployed at the remote site 
saving data to a local mhvtl) is a simple waste of resources, as you 
pointed out above, especially storage, since Bacula never truncates 
unused tape volumes but can truncate unused file volumes.


best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net 





Hi Radosław,

Just laziness. I had a working bacula configuration that served me well 
for many years. No reason to change a working setup.


Best regards,

Peter



Re: [Bacula-users] "ERR=database or disk is full" while building directory tree for restore

2018-12-31 Thread Peter Milesson

Hi folks,

Just for information about MySQL/MariaDB.

I have been using Bacula 9.0.6 under CentOS 7.5 for almost a year, with no 
issues whatsoever. CentOS 7.5 ships with MariaDB 5.5.56. AFAIK, 
that version of MariaDB isn't exactly rocket science, and is more or less 
compatible with comparable MySQL versions. I prefer a working setup over 
bleeding-edge (unstable) performance in a backup scenario. If CentOS 
bumps the MariaDB version to 10 and that causes problems, I will not 
hesitate for a moment to switch to PostgreSQL.


Just my 5 cents...

I wish everybody a happy and problem free new year.

Peter


On 31.12.2018 20:24, Kern Sibbald wrote:

Hello,

Yes, MariaDB is supposed to be MySQL compatible, but it is not 
really.  The first time that I installed MariaDB over MySQL (with lots 
of problems -- best to remove MySQL and the DB first), I ran the 
Bacula regression tests and MariaDB failed with a false detection of a 
lock deadlock, which I reported.  It took a while, but the project did 
fix the problem, so it *might* now run Bacula with no problems.  That 
remains to be seen.


The docker idea is very good.

Best regards,
Kern

On 12/27/18 8:32 AM, Adam Nielsen wrote:

I once tried MariaDB and found that it cannot be installed on the same
machine with MySQL unless you do a lot of tweaking at a very low level.
Currently I have both Postgres and MySQL installed on the same machines,
so supporting an additional DB is painful.

MariaDB is a fork of MySQL, maintained by the original MySQL
developers.  Oracle continues to maintain MySQL, but MariaDB is the
"spiritual" successor of MySQL.  "MySQL" was named after the creator's
first daughter, "My", and so since Oracle now owns this name the fork
has been named after his second daughter instead, "Maria".

The reason it is so hard to install MariaDB alongside MySQL is the same
as why it would be difficult to install Bacula 9.4 and 9.2 at the same
time - they are both different versions of the same project.


I expect that if MariaDB replaces all the MySQLs, eventually if no
community patches come for MariaDB I will be forced to add support for
it myself ...

MariaDB has already replaced MySQL in most Linux distributions, at
least those aimed at the desktop.  There have been very few
compatibility problems.  I am currently running Bacula with MariaDB on
the Arch Linux distribution and haven't encountered any issues so far,
but then my backups are only home-user sized.  As long as Bacula sticks
to the common feature set and doesn't use any of the new
Oracle-MySQL-specific features then it will probably continue to work
just fine with both MySQL and MariaDB for some time yet, with no special
maintenance effort.

If you did want to test against MySQL and MariaDB separately, this is
something you could easily do with Docker containers, as you can
install whichever versions of whichever packages you like inside the
containers, without affecting anything on your host machine.
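A hedged sketch of the Docker approach described above, using the official mysql and mariadb images (the image tags, ports, and password are illustrative, not a recommendation):

```
# Run MySQL and MariaDB side by side on different host ports
docker run -d --name mysql-test -e MYSQL_ROOT_PASSWORD=secret \
    -p 3306:3306 mysql:5.7
docker run -d --name mariadb-test -e MYSQL_ROOT_PASSWORD=secret \
    -p 3307:3306 mariadb:10.3

# Point a test Bacula catalog at 127.0.0.1:3306 or 127.0.0.1:3307
# to exercise each database without touching the host installation.
```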

Cheers,
Adam.











[Bacula-users] Windows backup much slower than Linux backup

2019-02-28 Thread Peter Milesson

Hi folks,

I'm backing up 2 servers with Bacula, one with Windows 2016, the other 
one with CentOS. The hardware is described below. The Windows server is 
much more powerful than the Linux server in all respects, and should 
theoretically deliver data to the Bacula server at a much higher rate. 
But in reality, the Linux server delivers data about 7 times faster over 
the network than the Windows server.


Is this completely normal, or should I start to check up the Windows 
server for problems?


Best regards,

Peter


Windows server (file server, RDP-server, Hyper-V host with 2 very 
lightly loaded VMs)

=
Hardware: HP DL180 Gen9, Intel Xeon E5-2683v4, 48GB RAM, Smart Array 
P440 Controller, 6x SAS 1GB (7200 rpm, 12 Gb/s) in RAID5

Network: 2x 10GbE to HPE 1950 switch (LACP)
OS: Windows 2016 (build 1607)
Throughput to Bacula server: 23-Feb 08:52 MySd JobId 991: Elapsed 
time=00:26:09, Transfer rate=4.071 M Bytes/second



Linux server (plain file server with Samba)
==
Hardware: HP DL120 Gen9, Intel Xeon E5-2603v3, 8GB RAM, HP Dynamic Smart 
Array B140i SATA Controller 2x SATA 2GB (7200 rpm) in RAID1

Network: 2x 1Gb to HPE 1950 switch (LACP)
OS: CentOS Linux 7.5 (1804)
Throughput to Bacula server: 23-Feb 08:26 MySd JobId 990: Elapsed 
time=00:26:08, Transfer rate=28.29 M Bytes/second



Bacula server
===
Hardware: standard motherboard with a 6-core AMD FX-6300 CPU, 4xSATA 8GB 
(7200 rpm) in RAID10

Network: Tehuti 10GbE NIC to ProCurve 2910al switch
OS: CentOS Linux 7.6 (1810)
Bacula server throughput to the RAID array: ca. 60 Mbytes/second

All switches are connected to our 10Gb/s optical network backbone.





Re: [Bacula-users] Windows backup much slower than Linux backup

2019-02-28 Thread Peter Milesson




On 01.03.2019 1:50, Adam Nielsen wrote:

I'm backing up 2 servers with Bacula, one with Windows 2016, the other
one with CentOS. The hardware is described below. The Windows server is
much more powerful than the Linux server in all respects, and should
theoretically deliver data to the Bacula server at a much higher rate.
But in reality, the Linux server delivers data about 7 times faster over
the network, than the Windows server.

It's very hard to say, because many small files will take longer to
transfer than a couple of large ones, and if there is other machine
activity reading or writing from the disk at the time then this can
sometimes slow the transfer speed dramatically, at least with magnetic
disks.

You could try to create a large file on both machines, say 10-20GB, and
then try to back up just that one file.  This should eliminate one
variable (many vs few files) and give you a better idea how different
the machines are.
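Creating the suggested test file is straightforward; a minimal sketch (the path and size are placeholders; use 10-20 GB for a real test, and /dev/urandom rather than /dev/zero so that compression doesn't skew the result):

```shell
# Create a random-filled test file (100 MB here; scale up for a real test).
dd if=/dev/urandom of=/tmp/bacula-speedtest.bin bs=1M count=100 status=none
# Print the resulting size in bytes.
stat -c%s /tmp/bacula-speedtest.bin
```

On the Windows side, `fsutil file createnew C:\speedtest.bin 10737418240` creates a comparable file, though it is zero-filled and therefore highly compressible.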

Perhaps you could also run some disk speed tests locally on each
machine.  Linux has hdparm included in most distros, but I'm not sure of
a Windows equivalent although I'm sure there are many out there.

Because you only have a single average transfer speed for the Windows
machine, it's conceivable that a failing disk will work fine for a
while but then become stuck on a few sectors and sit there for even a
few minutes retrying the read operation before continuing.  This could
result in a slow overall transfer rate but a fast speed test, if the
speed test doesn't read any of the failing data areas.  If the speed
test comes back good, then it might be worth trying to find some SMART
tools for your disk/controller which can usually query the drives for
the number of errors they've been encountering recently.  If those are
high for one drive then that could explain what's going on.
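On the Linux side, the local speed test and SMART checks mentioned above can be done with hdparm and smartctl (a sketch; the device name is a placeholder, both commands need root, and smartmontools must be installed):

```
# Sequential read speed of the raw device (cached and uncached)
hdparm -tT /dev/sda

# SMART overall health, then the attribute table; watch the
# reallocated and pending sector counts for signs of a failing drive
smartctl -H /dev/sda
smartctl -A /dev/sda
```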

Cheers,
Adam.

Hi Adam,

Thanks for your suggestions. It is true that the backup from the Windows 
server is mostly user profiles and (normal office) documents, while from 
the Linux server it's CAD and publishing data, where the files mostly 
are quite large. I have excluded the disk subsystem on the Windows 
server, as it is a true SAS RAID (not fake RAID) controller, where the 
RAID5 configuration is "overpopulated". A bad disk here should not have 
any impact on the transfer speed. I did run diagnostics on the RAID 
controller a month ago, and there are no error indications. The backups 
are running in the night, when there is no user activity on the network.


I'll give the large file approach a try during the weekend, and see 
where it gets me.


Best regards,

Peter







Re: [Bacula-users] Windows backup much slower than Linux backup

2019-02-28 Thread Peter Milesson

Hi Heitor,

No network bottlenecks. There isn't a single 100Mbit device in the path. 
Both servers are connected to the same switch, and the path to the 
backup server is 10GbE all the way.


Thanks for your input.

Best regards,

Peter


On 01.03.2019 3:12, Heitor Faria wrote:

Hello Peter,

Your rate is indeed very slow. Perhaps a 100 Mbits Ethernet bottleneck?
Please consider the following, especially creating multiple FileSets for 
the same machine, since the current Windows FD still does not handle 
read parallelization: http://bacula.us/tuning/
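The multiple-FileSets suggestion amounts to splitting one big Windows job into smaller jobs that can run concurrently. A hedged bacula-dir.conf sketch (all names and paths are placeholders, and "DefaultJob" stands in for whatever JobDefs the site already uses):

```
# Two FileSets covering disjoint parts of the Windows server
FileSet {
  Name = "Win-Users"
  Include {
    Options { signature = MD5 }
    File = "C:/Users"
  }
}
FileSet {
  Name = "Win-Data"
  Include {
    Options { signature = MD5 }
    File = "D:/Data"
  }
}

# Two jobs against the same client, which may run concurrently
Job {
  Name = "WinSrv-Users"
  Client = winsrv-fd
  FileSet = "Win-Users"
  JobDefs = "DefaultJob"
}
Job {
  Name = "WinSrv-Data"
  Client = winsrv-fd
  FileSet = "Win-Data"
  JobDefs = "DefaultJob"
}
```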


Regards,

Sent from TypeApp <http://www.typeapp.com/r?b=14554>
On Feb 28, 2019, at 2:45 PM, Peter Milesson <mi...@atmos.eu> wrote:


Hi folks,

I'm backing up 2 servers with Bacula, one with Windows 2016, the other
one with CentOS. The hardware is described below. The Windows server is
much more powerful than the Linux server in all respects, and should
theoretically deliver data to the Bacula server at a much higher rate.
But in reality, the Linux server delivers data about 7 times faster over
the network, than the Windows server.

Is this completely normal, or should I start to check up the Windows
server for problems?

Best regards,

Peter


Windows server (file server, RDP-server, Hyper-V host with 2 very
lightly loaded VMs)


Hardware: HP DL180 Gen9, Intel Xeon E5-2683v4, 48GB RAM, Smart Array
P440 Controller, 6x SAS 1GB (7200 rpm, 12 Gb/s) in RAID5
Network: 2x 10GbE to HPE 1950 switch (LACP)
OS: Windows 2016 (build 1607)
Throughput to Bacula server: 23-Feb 08:52 MySd JobId 991: Elapsed
time=00:26:09, Transfer rate=4.071 M Bytes/second


Linux server (plain file server with Samba)


Hardware: HP DL120 Gen9, Intel Xeon E5-2603v3, 8GB RAM, HP Dynamic Smart
Array B140i SATA Controller 2x SATA 2GB (7200 rpm) in RAID1
Network: 2x 1Gb to HPE 1950 switch (LACP)
OS: CentOS Linux 7.5 (1804)
Throughput to Bacula server: 23-Feb 08:26 MySd JobId 990: Elapsed
time=00:26:08, Transfer rate=28.29 M Bytes/second


Bacula server
===
Hardware: standard motherboard with a 6-core AMD FX-6300 CPU, 4xSATA 8GB
(7200 rpm) in RAID10
Network: Tehuti 10GbE NIC to ProCurve 2910al switch
OS: CentOS Linux 7.6 (1810)
Bacula server throughput to the RAID array: ca. 60 Mbytes/second

All switches are connected to our 10Gb/s optical network backbone.










Re: [Bacula-users] Windows backup much slower than Linux backup

2019-03-01 Thread Peter Milesson

Hi Sergio,

You are right about the file sets. The Windows file set is about a 
quarter the size of the Linux one. There are no database 
backups, just files. Nor have I seen any network bottlenecks. Both 
servers are connected to the same switch, with a 10GbE trunk directly to 
the backup server. And the backup jobs are scheduled at different times, 
so there should not be any network bottlenecks from competition for 
bandwidth. The Windows server uses NTFS, whereas the Linux server 
uses the ext4 file system.


I haven't got a clue whether the time it takes to create the VSS 
snapshots is included in the total time, or whether that time is 
significant. I guess somebody with knowledge of the inner workings of 
the Bacula FD could answer that question.


Best regards,

Peter


On 01.03.2019 8:35, Sergio Gelato wrote:

* Peter Milesson [2019-02-28 20:22:41 +0100]:

I'm backing up 2 servers with Bacula, one with Windows 2016, the other one
with CentOS. The hardware is described below. The Windows server is much
more powerful than the Linux server in all respects, and should
theoretically deliver data to the Bacula server at a much higher rate. But
in reality, the Linux server delivers data about 7 times faster over the
network, than the Windows server.

Is this completely normal, or should I start to check up the Windows server
for problems?

You didn't provide enough information. Are the filesets being backed up
comparable? (I mean in terms of number and size of files and fraction of
the filesystem being backed up.) It looks like both backups take nearly
the same wallclock time but end up with vastly different transfer rates,
so I'm guessing that the Windows backup job is a lot smaller. In fact,
the wallclock times are so close that I wonder if they aren't dominated
by some kind of fixed setup cost (slow database? network timeout issues?)

With a storage throughput of only 60 MB/s on the SD the difference in network
speeds (1Gb/s vs. 10Gb/s) is not going to be significant.

You also didn't mention the filesystems involved. NTFS on Windows and one
of the usual suspects (ext4, xfs, btrfs) on Linux? For backup jobs involving
lots of small files filesystem overhead is likely to make a difference.

How long does VSS snapshot creation take on the Windows side, by the way?
Is that overhead included in the average transfer rate calculation?

I suppose you meant TB instead of GB for the hard disk capacities.


Best regards,

Peter


Windows server (file server, RDP-server, Hyper-V host with 2 very lightly
loaded VMs)
=
Hardware: HP DL180 Gen9, Intel Xeon E5-2683v4, 48GB RAM, Smart Array P440
Controller, 6x SAS 1GB (7200 rpm, 12 Gb/s) in RAID5
Network: 2x 10GbE to HPE 1950 switch (LACP)
OS: Windows 2016 (build 1607)
Throughput to Bacula server: 23-Feb 08:52 MySd JobId 991: Elapsed
time=00:26:09, Transfer rate=4.071 M Bytes/second


Linux server (plain file server with Samba)
==
Hardware: HP DL120 Gen9, Intel Xeon E5-2603v3, 8GB RAM, HP Dynamic Smart
Array B140i SATA Controller 2x SATA 2GB (7200 rpm) in RAID1
Network: 2x 1Gb to HPE 1950 switch (LACP)
OS: CentOS Linux 7.5 (1804)
Throughput to Bacula server: 23-Feb 08:26 MySd JobId 990: Elapsed
time=00:26:08, Transfer rate=28.29 M Bytes/second


Bacula server
===
Hardware: standard motherboard with a 6-core AMD FX-6300 CPU, 4xSATA 8GB
(7200 rpm) in RAID10
Network: Tehuti 10GbE NIC to ProCurve 2910al switch
OS: CentOS Linux 7.6 (1810)
Bacula server throughput to the RAID array: ca. 60 Mbytes/second

All switches are connected to our 10Gb/s optical network backbone.









Re: [Bacula-users] Windows backup much slower than Linux backup

2019-03-01 Thread Peter Milesson

Hi Heitor,

Great! See the job log below. It's the last incremental job log, but it 
gives a good indication anyway (the actual Dir, Fd and Sd entries redacted).


Best regards,

Peter

28-Feb 23:05 MyDir JobId 1015: Start Backup JobId 1015, 
Job=Server2017.2019-02-28_23.05.00_03

28-Feb 23:05 MyDir JobId 1015: Using Device "ULT-0" to write.
28-Feb 23:05 MySd JobId 1015: Spooling data ...
28-Feb 23:06 Srv201701Fd JobId 1015: Generate VSS snapshots. 
Driver="Win64 VSS"

28-Feb 23:06 Srv201701Fd JobId 1015: Snapshot mount point: C:\
28-Feb 23:19 Srv201701Fd JobId 1015:  Could not stat 
"C:/Users/Administrator.MYDOM/AppData/Local/Microsoft/Windows/INetCache/Low/Content.IE5": 
ERR=System cannot find the path.


28-Feb 23:20 Srv201701Fd JobId 1015:  Could not stat 
"C:/Users/jojo/AppData/Local/Microsoft/Windows/INetCache/Low/Content.IE5": 
ERR=System cannot find the path.


28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): "Task 
Scheduler Writer", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): "VSS 
Metadata Store Writer", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): 
"Performance Counters Writer", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): 
"System Writer", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): "ASR 
Writer", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): "FSRM 
Writer", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): 
"WIDWriter", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): 
"Microsoft Hyper-V VSS Writer", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): 
"Shadow Copy Optimization Writer", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): 
"MSSearch Service Writer", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): "WMI 
Writer", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): 
"Registry Writer", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): "NPS 
VSS Writer", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 Srv201701Fd JobId 1015: VSS Writer (BackupComplete): "COM+ 
REGDB Writer", State: 0x1 (VSS_WS_STABLE)
28-Feb 23:24 MySd JobId 1015: Committing spooled data to Volume 
"F01028L5". Despooling 2,218,489,341 bytes ...
28-Feb 23:25 MySd JobId 1015: Despooling elapsed time = 00:00:38, 
Transfer rate = 58.38 M Bytes/second
28-Feb 23:25 MySd JobId 1015: Elapsed time=00:19:14, Transfer rate=1.920 
M Bytes/second
28-Feb 23:25 MySd JobId 1015: Sending spooled attrs to the Director. 
Despooling 2,564,116 bytes ...

28-Feb 23:25 MyDir JobId 1015: Bacula MyDir 9.0.6 (20Nov17):
  Build OS:   x86_64-redhat-linux-gnu redhat (Core)
  JobId:  1015
  Job:    Server2017.2019-02-28_23.05.00_03
  Backup Level:   Incremental, since=2019-02-27 23:06:26
  Client: "Srv201701Fd" 7.4.4 (28Sep16) Microsoft 
Standard Edition (build 9200), 64-bit,Cross-compile,Win64

  FileSet:    "Server2017Set" 2018-02-01 21:34:30
  Pool:   "Default" (From Job resource)
  Catalog:    "MyCatalog" (From Client resource)
  Storage:    "MySd" (From Pool resource)
  Scheduled time: 28-Feb-2019 23:05:00
  Start time: 28-Feb-2019 23:05:58
  End time:   28-Feb-2019 23:25:24
  Elapsed time:   19 mins 26 secs
  Priority:   10
  FD Files Written:   9,141
  SD Files Written:   9,141
  FD Bytes Written:   2,214,193,994 (2.214 GB)
  SD Bytes Written:   2,216,237,073 (2.216 GB)
  Rate:   1899.0 KB/s
  Software Compression:   None
  Comm Line Compression:  None
  Snapshot/VSS:   yes
  Encryption: no
  Accurate:   no
  Volume name(s): F01028L5
  Volume Session Id:  2
  Volume Session Time:    1551386754
  Last Volume Bytes:  113,804,070,912 (113.8 GB)
  Non-fatal FD errors:    2
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:    Backup OK -- with warnings


On 01.03.2019 13:21, Heitor Faria wrote:

Hello Peter,

Time for VSS generation is included and can sometimes be very slow.
I recommend that you post your job log.

Regards,

Sent from TypeApp <http://www.typeapp.com/r?b=14554>
On Mar 1, 2019, at 3:21 AM, Peter Milesson <mailto:mi..

Re: [Bacula-users] Windows backup much slower than Linux backup

2019-03-01 Thread Peter Milesson

Hi Uwe,

Thanks for your input. Something similar is what I would expect. I'm 
going to try some of the previous suggestions during the weekend, and 
see if there are some identifiable bottlenecks.


Best regards

On 01.03.2019 14:42, Uwe Schuerkamp wrote:

I just checked our installation (direct-to-tape backup, lto5, LAN
gigabit connectivity), and I'm not seeing any significant performance
issues between windows and Linux clients.

The evidence is naturally anecdotal though as several backups are
running concurrently, but I'm not seeing anything out of the ordinary
here.

Example for a rather large windows machine:

Full  5252701 files  3.31 TB  start 2019-02-22 17:00:00  end 2019-02-23 13:15:21  elapsed 20:15:21  rate 47.64 MB/s

Linux machine, different client, same bacula host (9.4.2):

Full  130206 files  2.13 TB  start 2019-02-25 21:17:34  end 2019-02-26 06:16:48  elapsed 08:59:14  rate 69.15 MB/s

The windows machine has many small files (file server), so this fact
alone is probably enough to explain the 25 MB/sec difference in
throughput.

All the best,

Uwe

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Windows backup much slower than Linux backup

2019-03-01 Thread Peter Milesson

Hi Josh,

With the current settings, last access updates were disabled for 
Windows, and neither ATIME nor NOATIME was set for the Linux server. So in the 
current setup, the Linux server was at a disadvantage. I changed the 
network buffer to 32k on the Windows server, and I'll be wiser tomorrow, 
if it helped.
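For reference, the directive in question lives in the FileDaemon resource of 
bacula-fd.conf on the Windows client; a minimal sketch (the resource name here 
is just a placeholder, only the buffer directive matters):

```conf
# bacula-fd.conf on the Windows client (sketch)
FileDaemon {
  Name = win-fd                        # placeholder name
  Maximum Network Buffer Size = 32768  # 32k instead of the 64k default
}
```

The file daemon must be restarted for the change to take effect.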


Thanks for the advice.

Best regards,

Peter

Dne 01.03.2019 v 16:40 Josh Fisher napsal(a):
I also attribute this to Windows inefficiencies, particularly in NTFS 
handling of small files. However, I am not sure that those 
inefficiencies explain a greater than 50% performance hit. Two quick 
changes come to mind that may help.


1. Change MaximumNetworkBufferSize to 32k in bacula-fd.conf. Windows 
has been known to dislike the default 64k network buffer size.


2. Set the DWORD value NtfsDisableLastAccessUpdate in 
HKLM/system/CurrentControlSet/Control/FileSystem to nonzero to prevent 
a write to the disk each time a file is accessed. It is the NTFS 
equivalent of the NOATIME option used in ext4 and other Linux 
filesystems. For Windows file servers holding lots and lots of small 
files, those last access updates add up to quite a lot of disk 
activity. Generally, last access time is not needed or all that 
useful. In particular, if NOATIME is being used on the Linux client 
and NtfsDisableLastAccessUpdate = 0 on the Windows client, then you 
are not comparing apples to apples.
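The registry change above can be made from an elevated command prompt on the 
Windows client (standard reg.exe usage; setting the value to 1 disables the 
last-access updates):

```
reg add HKLM\SYSTEM\CurrentControlSet\Control\FileSystem /v NtfsDisableLastAccessUpdate /t REG_DWORD /d 1 /f
```

A reboot may be needed before the new setting is honored by NTFS.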



On 3/1/2019 6:48 AM, Kern Sibbald wrote:

Hello,

I have noticed similar things.  I have always attributed the slower
speed on Windows to the fact that Microsoft hired the best students
from the best schools, but most of them knew nothing about programming
and programming history (in particular Unix), thus these geniuses
re-invented the OS wheel in designing and building a monolithic
operating system that took about 10 times as much code as Unix
(and subsequently Linux).  To me it is not surprising that Windows had
more bugs than Linux (despite huge advances, it probably still has more
bugs).  In any case, programming Windows is a nightmare for a Linux
programmer -- 10 times harder to do almost anything, because there are
far more OS calls; they all have different arguments; and many of them
are poorly documented, or not documented at all, ...

So, I have just attributed this to being normal Windows inefficiencies.

Of course, the above is sort of a gut feeling.  Perhaps someone can do
some real performance testing and figure out what is really going on.

Best regards,
Kern

On 2/28/19 8:22 PM, Peter Milesson wrote:

Hi folks,

I'm backing up 2 servers with Bacula, one with Windows 2016, the other
one with CentOS. The hardware is described below. The Windows server
is much more powerful than the Linux server in all respects, and
should theoretically deliver data to the Bacula server at a much
higher rate. But in reality, the Linux server delivers data about 7
times faster over the network, than the Windows server.

Is this completely normal, or should I start to check up the Windows
server for problems?

Best regards,

Peter


Windows server (file server, RDP-server, Hyper-V host with 2 very
lightly loaded VMs)
=
Hardware: HP DL180 Gen9, Intel Xeon E5-2683v4, 48GB RAM, Smart Array
P440 Controller, 6x SAS 1GB (7200 rpm, 12 Gb/s) in RAID5
Network: 2x 10GbE to HPE 1950 switch (LACP)
OS: Windows 2016 (build 1607)
Throughput to Bacula server: 23-Feb 08:52 MySd JobId 991: Elapsed
time=00:26:09, Transfer rate=4.071 M Bytes/second


Linux server (plain file server with Samba)
==
Hardware: HP DL120 Gen9, Intel Xeon E5-2603v3, 8GB RAM, HP Dynamic
Smart Array B140i SATA Controller 2x SATA 2GB (7200 rpm) in RAID1
Network: 2x 1Gb to HPE 1950 switch (LACP)
OS: CentOS Linux 7.5 (1804)
Throughput to Bacula server: 23-Feb 08:26 MySd JobId 990: Elapsed
time=00:26:08, Transfer rate=28.29 M Bytes/second


Bacula server
===
Hardware: standard motherboard with a 6-core AMD FX-6300 CPU, 4xSATA
8GB (7200 rpm) in RAID10
Network: Tehuti 10GbE NIC to ProCurve 2910al switch
OS: CentOS Linux 7.6 (1810)
Bacula server throughput to the RAID array: ca. 60 Mbytes/second

All switches are connected to our 10Gb/s optical network backbone.
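As a quick sanity check, the "about 7 times faster" figure matches the two 
despool transfer rates quoted above (28.29 vs. 4.071 MB/s); a throwaway awk 
one-liner:

```shell
# ratio of the Linux and Windows despool rates from the job logs above
awk 'BEGIN { printf "%.1f\n", 28.29 / 4.071 }'   # prints 6.9
```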













Re: [Bacula-users] Windows backup much slower than Linux backup

2019-03-03 Thread Peter Milesson

Hi folks,

I did some testing during the weekend.

 * When backing up a huge file (> 10GB), the Windows transfer rate is
   comparable to the Linux transfer rate (32 MB/s).
 * Setting the file daemon buffer to 32k on the Windows server seemed
   to help, but not very much.
 * The Windows backup transfer rate is still a lot slower than expected
   (22 MB/s) for a full backup of 270 GB (298000 files), whereas the
   Linux backup runs at 35 MB/s for a full backup of 783 GB (461500 files).
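One way to read those numbers (an illustrative calculation from the figures 
above, not part of the original mail): the average file size in the two 
backup sets differs by roughly 2x, which is consistent with per-file overhead 
dominating the Windows run:

```shell
# average file size per backup set (decimal units: 1 GB = 1000 MB)
awk 'BEGIN {
  printf "windows: %.2f MB/file\n", 270 * 1000 / 298000
  printf "linux:   %.2f MB/file\n", 783 * 1000 / 461500
}'
# prints:
# windows: 0.91 MB/file
# linux:   1.70 MB/file
```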

The next thing I'm going to try is moving all overhead off of the 
Windows server. It will take a while to move things around, as I need to 
get a new server for this.


Best regards,

Peter

On 01.03.2019 17:27, Peter Milesson wrote:

Hi Josh,

With the current settings, last access updates were disabled for 
Windows, and neither ATIME nor NOATIME was set for the Linux server. So in the 
current setup, the Linux server was at a disadvantage. I changed the 
network buffer to 32k on the Windows server, and I'll be wiser 
tomorrow, if it helped.


Thanks for the advice.

Best regards,

Peter

Dne 01.03.2019 v 16:40 Josh Fisher napsal(a):
I also attribute this to Windows inefficiencies, particularly in NTFS 
handling of small files. However, I am not sure that those 
inefficiencies explain a greater than 50% performance hit. Two quick 
changes come to mind that may help.


1. Change MaximumNetworkBufferSize to 32k in bacula-fd.conf. Windows 
has been known to dislike the default 64k network buffer size.


2. Set the DWORD value NtfsDisableLastAccessUpdate in 
HKLM/system/CurrentControlSet/Control/FileSystem to nonzero to 
prevent a write to the disk each time a file is accessed. It is the 
NTFS equivalent of the NOATIME option used in ext4 and other Linux 
filesystems. For Windows file servers holding lots and lots of small 
files, those last access updates add up to quite a lot of disk 
activity. Generally, last access time is not needed or all that 
useful. In particular, if NOATIME is being used on the Linux client 
and NtfsDisableLastAccessUpdate = 0 on the Windows client, then you 
are not comparing apples to apples.



On 3/1/2019 6:48 AM, Kern Sibbald wrote:

Hello,

I have noticed similar things.  I have always attributed the slower
speed on Windows to the fact that Microsoft hired the best students
from the best schools, but most of them knew nothing about programming
and programming history (in particular Unix), thus these geniuses
re-invented the OS wheel in designing and building a monolithic
operating system that took about 10 times as much code as Unix
(and subsequently Linux).  To me it is not surprising that Windows had
more bugs than Linux (despite huge advances, it probably still has more
bugs).  In any case, programming Windows is a nightmare for a Linux
programmer -- 10 times harder to do almost anything, because there are
far more OS calls; they all have different arguments; and many of them
are poorly documented, or not documented at all, ...

So, I have just attributed this to being normal Windows inefficiencies.

Of course, the above is sort of a gut feeling.  Perhaps someone can do
some real performance testing and figure out what is really going on.

Best regards,
Kern

On 2/28/19 8:22 PM, Peter Milesson wrote:

Hi folks,

I'm backing up 2 servers with Bacula, one with Windows 2016, the other
one with CentOS. The hardware is described below. The Windows server
is much more powerful than the Linux server in all respects, and
should theoretically deliver data to the Bacula server at a much
higher rate. But in reality, the Linux server delivers data about 7
times faster over the network, than the Windows server.

Is this completely normal, or should I start to check up the Windows
server for problems?

Best regards,

Peter


Windows server (file server, RDP-server, Hyper-V host with 2 very
lightly loaded VMs)
=
Hardware: HP DL180 Gen9, Intel Xeon E5-2683v4, 48GB RAM, Smart Array
P440 Controller, 6x SAS 1GB (7200 rpm, 12 Gb/s) in RAID5
Network: 2x 10GbE to HPE 1950 switch (LACP)
OS: Windows 2016 (build 1607)
Throughput to Bacula server: 23-Feb 08:52 MySd JobId 991: Elapsed
time=00:26:09, Transfer rate=4.071 M Bytes/second


Linux server (plain file server with Samba)
==
Hardware: HP DL120 Gen9, Intel Xeon E5-2603v3, 8GB RAM, HP Dynamic
Smart Array B140i SATA Controller 2x SATA 2GB (7200 rpm) in RAID1
Network: 2x 1Gb to HPE 1950 switch (LACP)
OS: CentOS Linux 7.5 (1804)
Throughput to Bacula server: 23-Feb 08:26 MySd JobId 990: Elapsed
time=00:26:08, Transfer rate=28.29 M Bytes/second


Bacula server
===
Hardware: standard motherboard with a 6-core AMD FX-6300 CPU, 4xSATA
8GB (7200 rpm) in RAID10
Network: Tehuti 10GbE NIC to ProCurve 2910al switch
OS: CentOS Linux 7.6 (1810)
Bacula server throughput to the RAID array: ca. 60 Mbytes/second

All switches are connected to our 10Gb/s optical network backbone.

Re: [Bacula-users] Windows backup slow

2020-04-03 Thread Peter Milesson



On 2020-04-01 11:28, Andrew Watkins wrote:

Hello,

Just started using Bacula and at this time (early stages) I find my 
UNIX/Solaris full backups are running at a good speed, but our Windows 
server is slow. There is a chance it is just the number of files (yes, 
I am ignoring profiles). My questions:


1) I have added "Maximum Concurrent Jobs" to my client's FileDaemon, 
but is there a way to prove that the Windows client is using it?


2) Any web links on how I can monitor a client backup, to examine what 
is happening?


Thanks

Andrew


Hi Andrew,

When you brought it up, I had a look at my setup. I'm backing up both 
Linux and Windows servers.


The Linux server backups are running at about 30 Mbyte/s (2x1Gbit NICs) 
with line compression, whereas the Windows server backups are running at 
about 22 Mbytes/s (2x10Gbit NICs) without compression. The connection is 
10 Gbit from all servers to the Bacula backup server. The values 
mentioned are for full monthly server backups (a couple of TBs).


What I did notice however, is that small incremental backups are an 
order of magnitude slower for the Windows servers, compared to the Linux 
servers.


I'm not going to speculate, but compression would probably speed up 
things somewhat for the Windows backups, however not significantly. 
Also, VSS snapshots under Windows may have a huge impact on the overall 
performance for smaller backup sets.


For me, the current performance is sufficient, but there is certainly 
lots of room for tweaking. I've been running this setup for about 9 
years now, just improving the hardware now and then. Don't fix what's 
working...


Best regards,

Peter





Re: [Bacula-users] Windows backup slow

2020-04-21 Thread Peter Milesson



On 2020-04-20 12:37, Andrew Watkins wrote:

Hi,

Still not having much luck with speed of my Windows backup, so any 
pointers?

Even when I used Networker, Windows backups were slower, but not this bad!

Solaris Client Full: many filesystems

  Elapsed time:   10 hours 31 mins 14 secs
  Priority:   10
  FD Files Written:   5,670,614
  SD Files Written:   5,670,614
  FD Bytes Written:   775,767,526,417 (775.7 GB)
  SD Bytes Written:   776,760,355,024 (776.7 GB)
  Rate:   20482.9 KB/s
  Software Compression:   None
  Comm Line Compression:  75.0% 4.0:1
  Snapshot/VSS:   no
  Encryption: no
  Accurate:   no

Windows Client Full: 2 filesystems with many exclusions (exclude 
profiles, etc)


  Elapsed time:   22 hours 39 mins 41 secs
  Priority:   10
  FD Files Written:   3,653,666
  SD Files Written:   3,653,666
  FD Bytes Written:   303,603,777,832 (303.6 GB)
  SD Bytes Written:   304,379,400,097 (304.3 GB)
  Rate:   3721.5 KB/s
  Software Compression:   None
  Comm Line Compression:  30.2% 1.4:1
  Snapshot/VSS:   yes
  Encryption: no
  Accurate:   no


On 4/3/2020 5:35 PM, Peter Milesson wrote:


On 2020-04-01 11:28, Andrew Watkins wrote:

Hello,

Just started using Bacula and at this time (early stages) I find my 
UNIX/Solaris full backups are running at a good speed, but our 
Windows server is slow. There is a chance it is just the number of 
files (yes, I am ignoring profiles). My questions:


1) I have added "Maximum Concurrent Jobs" to my client's FileDaemon, 
but is there a way to prove that the Windows client is using it?


2) Any web links to how I can monitor a client backup, to examine 
what is happening.


Thanks

Andrew


Hi Andrew,

When you brought it up, I had a look at my setup. I'm backing up both 
Linux and Windows servers.


The Linux server backups are running at about 30 Mbyte/s (2x1Gbit 
NICs) with line compression, whereas the Windows server backups are 
running at about 22 Mbytes/s (2x10Gbit NICs) without compression. The 
connection is 10 Gbit from all servers to the Bacula backup server. 
The values mentioned are for full monthly server backups (a couple of 
TBs).


What I did notice however, is that small incremental backups are an 
order of magnitude slower for the Windows servers, compared to the 
Linux servers.


I'm not going to speculate, but compression would probably speed up 
things somewhat for the Windows backups, however not significantly. 
Also, VSS snapshots under Windows may have a huge impact on the 
overall performance for smaller backup sets.


For me, the current performance is sufficient, but there is certainly 
lots of room for tweaking. I've been running this setup for about 9 
years now, just improving the hardware now and then. Don't fix what's 
working...


Best regards,

Peter 



Thanks,

Andrew


Hi Andrew,

Here's part of the log from one Windows server, the total volume is 
quite similar to yours, but you seem to have got much smaller files:


  Elapsed time:   3 hours 57 mins 51 secs
  Priority:   10
  FD Files Written:   368,404
  SD Files Written:   368,404
  FD Bytes Written:   318,235,741,624 (318.2 GB)
  SD Bytes Written:   318,313,424,664 (318.3 GB)
  Rate:   22299.5 KB/s
  Software Compression:   None
  Comm Line Compression:  None
  Snapshot/VSS:   yes
  Encryption: no
  Accurate:   no

The server is an HPE ProLiant DL-180 Gen9 with SAS drives (15000 rpm) in 
RAID-5, running Windows 2016.


A couple of points to look at:

 * Network performance (I have got a 10Gbit link directly from the
   server, so compression probably doesn't make sense)
 * Storage performance (I have got 4 SATA disks (7200 rpm) in RAID 10
   as virtual tapes)

I have noticed a similar performance as yours when making incremental 
backups, but those backups are less than 10Gb in volume.


Best regards,

Peter



Re: [Bacula-users] bacula-dir 9.4.4 runnig on Centos 6.4 host and bacula-fd 9.4.4 running on Centos 7.8.

2020-04-23 Thread Peter Milesson

Hi folks,

I'm running CentOS 7.7 on the Bacula server and have also got one client 
running CentOS 7.7. I have never had any problems with firewalld. Just keep 
firewalld running, and make sure the appropriate ports are allowed 
(default 9101, 9102, 9103), plus any other ports you need. Using firewalld 
or iptables is just a matter of personal taste/convenience.
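On CentOS 7 with firewalld, opening the default Bacula ports looks like this 
(standard firewall-cmd usage; run as root):

```shell
# open the default Director/FD/SD ports permanently, then reload the rules
firewall-cmd --permanent --add-port=9101-9103/tcp
firewall-cmd --reload
```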


Best regards,

Peter


On 2020-04-23 20:17, dmaziuk via Bacula-users wrote:

On 4/23/2020 7:45 AM, Martin Simmons wrote:
Check if the Centos 7.8 host is running a firewall (e.g. run iptables 
-L -v).


Centos 7 installs firewalld by default. Disable it (don't remove it) 
and install iptables-services instead.


Dima








[Bacula-users] Fatal error: catreq.c:751 fread attr spool error. Wanted 1478509141 bytes, maximum permitted 10000000 bytes

2021-05-01 Thread Peter Milesson

Hi folks,

When running monthly full backup on a Linux server, the following error 
occurred in the final steps of the backup process, when despooling 
attributes. Previous log entry was: Sending spooled attrs to the 
Director. Despooling 180,651,607 bytes ...


Fatal error: catreq.c:751 fread attr spool error. Wanted 1478509141 
bytes, maximum permitted 10000000 bytes


It has been reported previously in this list on 15 February 2021, 
without any obvious solution.


The number of backed up files are 576,816, and the backup size is 
1,024,746,654,774 (1.024 TB).


Both the backup server and client runs Bacula 9.0.6, and both with OS 
CentOS 7.8. This is the first time I have seen this message since I 
installed the system 4 years ago.


There is plenty of space in the spool directory, plenty of RAM, the data 
is directly written to virtual tapes (mhvtl) without data spooling, and 
there is no line compression, as the network connection is 10Gbits/s, 
with the bottleneck being the disk subsystem on the backup server (RAID10).


Suggestions are very welcome.

Best regards,

Peter








Re: [Bacula-users] Fatal error: catreq.c:751 fread attr spool error. Wanted 1478509141 bytes, maximum permitted 10000000 bytes

2021-05-02 Thread Peter Milesson

Hi folks,

Answering my own question: installing Bacula 9.2.2 solved this problem.

Best regards,

Peter

On 2021-05-01 15:05, Peter Milesson wrote:

Hi folks,

When running monthly full backup on a Linux server, the following 
error occurred in the final steps of the backup process, when 
despooling attributes. Previous log entry was: Sending spooled attrs 
to the Director. Despooling 180,651,607 bytes ...


Fatal error: catreq.c:751 fread attr spool error. Wanted 1478509141 
bytes, maximum permitted 10000000 bytes


It has been reported previously in this list on 15 February 2021, 
without any obvious solution.


The number of backed up files are 576,816, and the backup size is 
1,024,746,654,774 (1.024 TB).


Both the backup server and client runs Bacula 9.0.6, and both with OS 
CentOS 7.8. This is the first time I have seen this message since I 
installed the system 4 years ago.


There is plenty of space in the spool directory, plenty of RAM, the 
data is directly written to virtual tapes (mhvtl) without data 
spooling, and there is no line compression, as the network connection 
is 10Gbits/s, with the bottleneck being the disk subsystem on the 
backup server (RAID10).


Suggestions are very welcome.

Best regards,

Peter












[Bacula-users] Backup catalog error

2021-05-05 Thread Peter Milesson

Hi folks,

After upgrading to Bacula 9.2.2, the following backup error occurs when 
the daily catalog backup is run:


05-May 05:00 MyDir JobId 3087: shell command: run BeforeJob 
"/opt/bacula/scripts/make_catalog_backup.pl MyCatalog"
05-May 05:00 MyDir JobId 3087: Start Backup JobId 3087, 
Job=BackupCatalog.2021-05-05_05.00.00_07
05-May 05:00 MyDir JobId 3087: Using Device "ULT-0" to write.
05-May 05:00 BackupsrvFd JobId 3087:  Could not stat 
"/spool/bacula/bacula.sql": ERR=No such file or directory
05-May 05:00 MySd JobId 3087: Elapsed time=00:00:01, Transfer rate=0  
Bytes/second
05-May 05:00 MySd JobId 3087: Sending spooled attrs to the Director. Despooling 
0 bytes ...
05-May 05:00 MyDir JobId 3087: Bacula MyDir 9.2.2 (06Nov18):
  Build OS:   x86_64-redhat-linux-gnu-bacula redhat Enterprise 
release
  JobId:  3087
  Job:BackupCatalog.2021-05-05_05.00.00_07
  Backup Level:   Full
  Client: "BackupsrvFd" 9.2.2 (06Nov18) 
x86_64-redhat-linux-gnu-bacula,redhat,Enterprise release
  FileSet:"Catalog" 2021-05-03 05:00:00
  Pool:   "Default" (From Job resource)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"MySd" (From Pool resource)
  Scheduled time: 05-May-2021 05:00:00
  Start time: 05-May-2021 05:00:27
  End time:   05-May-2021 05:00:28
  Elapsed time:   1 sec
  Priority:   11
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  Comm Line Compression:  None
  Snapshot/VSS:   no
  Encryption: no
  Accurate:   no
  Volume name(s):
  Volume Session Id:  6
  Volume Session Time:1620071155
  Last Volume Bytes:  52,290,846,720 (52.29 GB)
  Non-fatal FD errors:1
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK -- with warnings

05-May 05:00 MyDir JobId 3087: Begin pruning Jobs older than 6 months .
05-May 05:00 MyDir JobId 3087: Pruned 1 Job for client BackupsrvFd from catalog.
05-May 05:00 MyDir JobId 3087: Begin pruning Files.
05-May 05:00 MyDir JobId 3087: Pruned Files from 1 Jobs for client BackupsrvFd 
from catalog.
05-May 05:00 MyDir JobId 3087: End auto prune.

05-May 05:00 MyDir JobId 3087: shell command: run AfterJob 
"/opt/bacula/scripts/delete_catalog_backup"

The normal backups seem to terminate without any errors. It's quite 
clear that the bacula.sql file cannot be created or written, which 
triggers the error message. The spool directory /spool/bacula is 
otherwise used by Bacula during normal backups without any error 
messages. The directory /spool/bacula is owned by bacula:bacula and with 
permissions 0750.


I assume that the MySQL database bacula is backed up in this job. If 
so, I do not worry very much, as a separate cron job backs up the 
bacula database to an external server.


I'm grateful for any input concerning this problem.

Best regards,

Peter






Re: [Bacula-users] Backup catalog error

2021-05-06 Thread Peter Milesson




On 2021-05-06 15:51, Josip Deanovic wrote:

On Wednesday 2021-05-05 20:53:42 Peter Milesson wrote:

Hi folks,

After upgrading to Bacula 9.2.2, the following backup error occurs when
the daily catalog backup is run:

[...]


The normal backups seem to terminate without any errors. It's quite
clear that the bacula.sql file cannot be created or written, which
triggers the error message. The spool directory /spool/bacula is
otherwise used by Bacula during normal backups without any error
messages. The directory /spool/bacula is owned by bacula:bacula and with
permissions 0750.

I assume that the MySQL database bacula is backed up in this job. If it
is so, I do not worry very much, as there is a cron job run separately,
that backs up the bacula database to an external server.

I'm grateful for any input concerning this problem.

Hello!

Are you sure it is not defined in your FileSet named "Catalog"?


Regards!


Hi Josip,

It worked under Bacula 9.0.6. I'm using the same configuration files, 
just with tweaks for different paths. The bacula.sql file is defined in 
the FileSet named "Catalog" as /spool/bacula/bacula.sql. So it really 
should work. I will try an even later version during the weekend.


Thanks for your input.

Peter






Re: [Bacula-users] Backup catalog error

2021-05-07 Thread Peter Milesson




On 2021-05-07 12:01, Martin Simmons wrote:

On Thu, 6 May 2021 21:54:54 +0200, Peter Milesson said:

On 2021-05-06 15:51, Josip Deanovic wrote:

On Wednesday 2021-05-05 20:53:42 Peter Milesson wrote:

Hi folks,

After upgrading to Bacula 9.2.2, the following backup error occurs when
the daily catalog backup is run:

[...]


The normal backups seem to terminate without any errors. It's quite
clear that the bacula.sql file cannot be created or written, which
triggers the error message. The spool directory /spool/bacula is
otherwise used by Bacula during normal backups without any error
messages. The directory /spool/bacula is owned by bacula:bacula and with
permissions 0750.

I assume that the MySQL database bacula is backed up in this job. If it
is so, I do not worry very much, as there is a cron job run separately,
that backs up the bacula database to an external server.

I'm grateful for any input concerning this problem.

Hello!

Are you sure it is not defined in your FileSet named "Catalog"?


Regards!


Hi Josip,

It worked under Bacula 9.0.6. I'm using the same configuration files,
just with tweaks for different paths. The bacula.sql file is defined in
the FileSet named "Catalog" as /spool/bacula/bacula.sql. So it really
should work. I will try an even later version during the weekend.

What is the WorkingDirectory in your bacula-dir.conf?  The
make_catalog_backup.pl script writes the bacula.sql file to that directory.

__Martin


Hi Martin,

The working directory is the same, that is /spool/bacula

There is about 2TB of free space in the /spool volume, so that is not the 
bottleneck. Probably the bacula.sql file is not created at all, as the 
Catalog backup terminates within a few seconds. If I start a regular 
backup of the bacula database (mysqldump), it runs for at least a couple 
of minutes. At least I assume the bacula.sql file is a backup of the 
bacula database. I'm no expert in Perl, so with my little knowledge, it 
seems to be the case. Does bacula-dir use the wrong permissions (uid, gid) 
when creating the file?


Thanks for your input.

Best regards,

Peter





Re: [Bacula-users] Backup catalog error

2021-05-07 Thread Peter Milesson




On 2021-05-07 13:12, Martin Simmons wrote:

On Fri, 7 May 2021 12:37:32 +0200, Peter Milesson said:

On 2021-05-07 12:01, Martin Simmons wrote:

On Thu, 6 May 2021 21:54:54 +0200, Peter Milesson said:

On 2021-05-06 15:51, Josip Deanovic wrote:

On Wednesday 2021-05-05 20:53:42 Peter Milesson wrote:

Hi folks,

After upgrading to Bacula 9.2.2, the following backup error occurs when
the daily catalog backup is run:

[...]


The normal backups seem to terminate without any errors. It's quite
clear that the bacula.sql file cannot be created or written, which
triggers the error message. The spool directory /spool/bacula is
otherwise used by Bacula during normal backups without any error
messages. The directory /spool/bacula is owned by bacula:bacula and with
permissions 0750.

I assume that the MySQL database bacula is backed up in this job. If it
is so, I do not worry very much, as there is a cron job run separately,
that backs up the bacula database to an external server.

I'm grateful for any input concerning this problem.

Hello!

Are you sure it is not defined in your FileSet named "Catalog"?


Regards!


Hi Josip,

It worked under Bacula 9.0.6. I'm using the same configuration files,
just with tweaks for different paths. The bacula.sql file is defined in
the FileSet named "Catalog" as /spool/bacula/bacula.sql. So it really
should work. I will try an even later version during the weekend.

What is the WorkingDirectory in your bacula-dir.conf?  The
make_catalog_backup.pl script writes the bacula.sql file to that directory.

__Martin


Hi Martin,

The working directory is the same, that is /spool/bacula

There is about 2TB of free space in the /spool volume, so that is not the
bottleneck. Probably the bacula.sql file is not created at all, as the
Catalog backup terminates within a few seconds. If I start a regular
backup of the bacula database (mysqldump), it runs for at least a couple
of minutes. At least I assume the bacula.sql file is a backup of the
bacula database. I'm no expert in Perl, so with my little knowledge, it
seems to be the case. Does bacula-dir use the wrong permissions (uid, gid)
when creating the file?

Yes, bacula.sql file is a backup of the bacula database when it works :-)

bacula-dir runs as whatever uid/gid you pass on the command line (you can
check with ps).
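For example, the uid/gid can be confirmed with a standard ps invocation 
(assuming the daemon process is named bacula-dir):

```shell
# show the user/group the running director was actually started as
ps -o user,group,args -C bacula-dir
```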

I would expect to see any permission errors in the bacula log if
make_catalog_backup.pl fails.

You could edit the make_catalog_backup.pl to print the command by looking for
the line like this:

 exec("HOME='$wd' mysqldump -f --opt $args{db_name} > 
'$wd/$args{db_name}.sql'");

and adding a copy of this line before the exec line with exec replaced by
print.

Also, make_catalog_backup.pl should create the file /spool/bacula/.my.cnf
containing the MySQL connection details from bacula-dir.conf so check that
these are correct.

__Martin

Hi Martin,

Problem solved. The working path in make_catalog_backup.pl was hardcoded 
to /opt/bacula/working and not assigned from the dbcheck command output. 
I just replaced the faulty path with /spool/bacula, and ran the catalog 
backup without errors.


Thanks for pointing me in the right direction.

Best regards,

Peter





Re: [Bacula-users] Backup catalog error

2021-05-07 Thread Peter Milesson




On 2021-05-07 16:05, Martin Simmons wrote:

On Fri, 7 May 2021 13:40:03 +0200, Peter Milesson said:

On 2021-05-07 13:12, Martin Simmons wrote:

On Fri, 7 May 2021 12:37:32 +0200, Peter Milesson said:

On 2021-05-07 12:01, Martin Simmons wrote:

On Thu, 6 May 2021 21:54:54 +0200, Peter Milesson said:

On 2021-05-06 15:51, Josip Deanovic wrote:
On Wednesday 2021-05-05 20:53:42 Peter Milesson wrote:

Hi folks,

After upgrading to Bacula 9.2.2, the following backup error occurs when
the daily catalog backup is run:

[...]

The normal backups seem to terminate without any errors. It's quite
clear that the bacula.sql file cannot be created or written, which
triggers the error message. The spool directory /spool/bacula is
otherwise used by Bacula during normal backups without any error
messages. The directory /spool/bacula is owned by bacula:bacula and with
permissions 0750.

I assume that the MySQL database bacula is backed up in this job. If it
is so, I do not worry very much, as there is a cron job run separately,
that backs up the bacula database to an external server.

I'm grateful for any input concerning this problem.

Hello!
Are you sure it is not defined in your FileSet named "Catalog"?



Regards!
Hi Josip,

It worked under Bacula 9.0.6. I'm using the same configuration files,
just with tweaks for different paths. The bacula.sql file is defined in
the FileSet named "Catalog" as /spool/bacula/bacula.sql. So it really
should work. I will try an even later version during the weekend.

What is the WorkingDirectory in your bacula-dir.conf?  The
make_catalog_backup.pl script writes the bacula.sql file to that directory.

__Martin


Hi Martin,

The working directory is the same, that is /spool/bacula

There is about 2TB free space in the /spool volume, so that is not the
bottleneck. Probably the bacula.sql file is not created at all, as the
Catalog backup terminates within a few seconds. If I start a regular
backup on the bacula database (mysqldump), it runs for at least a couple
of minutes. At least I assume the bacula.sql file is a backup of the
bacula database. I'm no expert in Perl, so with my little knowledge, it
seems to be the case. Does bacula-dir use the wrong permissions (uid, gid)
when creating the file?

Yes, bacula.sql file is a backup of the bacula database when it works :-)

bacula-dir runs as whatever uid/gid you pass on the command line (you can
check with ps).

I would expect to see any permission errors in the bacula log if
make_catalog_backup.pl fails.

You could edit the make_catalog_backup.pl to print the command by looking for
the line like this:

  exec("HOME='$wd' mysqldump -f --opt $args{db_name} > '$wd/$args{db_name}.sql'");

and adding a copy of this line before the exec line with exec replaced by
print.
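
Applied to the script, the suggested change might look like this (a sketch only; the exact exec line can differ between Bacula versions):

```perl
# Debugging aid: duplicate the exec line with exec replaced by print,
# so the exact mysqldump invocation appears in the job log before it runs.
print("HOME='$wd' mysqldump -f --opt $args{db_name} > '$wd/$args{db_name}.sql'\n");
exec("HOME='$wd' mysqldump -f --opt $args{db_name} > '$wd/$args{db_name}.sql'");
```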

Also, make_catalog_backup.pl should create the file /spool/bacula/.my.cnf
containing the MySQL connection details from bacula-dir.conf so check that
these are correct.

__Martin

Hi Martin,

Problem solved. The working path in make_catalog_backup.pl was hardcoded
to /opt/bacula/working and not assigned from the dbcheck command output.
I just replaced the faulty path with /spool/bacula, and ran the catalog
backup without errors.

Thanks for pointing me in the right direction.

Glad that worked.

Any idea what went wrong with the output of dbcheck?  Did it contain the
working_dir= line?

__Martin


Hi Martin,

The output of dbcheck is absolutely OK. However, the script does not 
pick up the contents of the working_dir= line. There is a variable $wd in 
the script, which is given the fixed value /opt/bacula/working, but I don't 
see any assignment from the proper field in the $dir_conf variable to $wd.
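
A hypothetical one-line repair, assuming the dbcheck output is available in $dir_conf as described above, would be to take $wd from the working_dir= line instead of the hardcoded default:

```perl
# Hypothetical fix (the working_dir= field name is taken from the discussion
# above): fall back to the old default only if no working_dir line is found.
my $wd = ($dir_conf =~ /^working_dir=(.+)$/m) ? $1 : '/opt/bacula/working';
```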


I'll let it be for now, as it is a quite outdated version of Bacula 
(9.2.2). I will upgrade to the latest version in the 9-series, and stay 
with it until it's time to send the backup computer to /dev/null (which 
will be sometime close to the end of this year).


Thanks for your help, very appreciated!

Best regards,

Peter





[Bacula-users] Virtual tapes or virtual disks

2022-01-21 Thread Peter Milesson via Bacula-users

Hi folks,

I am building a new backup server that is going to replace the old one. 
The old backup server uses Bacula ver. 9.2.2 with a virtual tape library 
(mhvtl). I have found mhvtl a bit tricky, mostly when updating the OS 
(CentOS 7.8), as it is necessary to recompile the mhvtl kernel driver. 
Mhvtl also seems a bit outdated, with intermittent development and 
updates, and it is problematic with newer and upcoming Linux kernel versions. I 
also want to jump off the RedHat train, as I see it deviating more and 
more from mainstream Linux. Therefore, I would prefer to choose another 
solution.


I have studied the Bacula documentation and it seems to be possible to 
use disk-based backup in an autochanger role. I plan to use volumes with 
a size of around 200 GB, making the setup fairly flexible and not leaving 
too big a hole when volumes need to be purged. Total disk space is 
around 30TB.


If somebody has got experience with disk-based, multi-volume Bacula 
backup, I would be grateful for some information (tips, what to 
expect, pitfalls, etc.).


Best regards,

Peter




Re: [Bacula-users] Virtual tapes or virtual disks

2022-01-21 Thread Peter Milesson via Bacula-users




On 21.01.2022 14:56, Josip Deanovic wrote:

On 2022-01-21 14:03, Peter Milesson via Bacula-users wrote:

Hi folks,

I am building a new backup server that is going to replace the old
one. The old backup server uses Bacula ver. 9.2.2 with a virtual tape
library (mhvtl). I have found mhvtl a bit tricky, mostly when updating
the OS (CentOS 7.8), as it is necessary to recompile the mhvtl kernel
driver. Mhvtl also seems a bit outdated, with intermittent development
and updates, also problematic with newer and coming Linux kernel
versions. I also want to jump off the RedHat train, as I see it
deviate more and more from mainstream Linux. Therefore, I would prefer
to choose another solution.

I have studied the Bacula documentation and it seems to be possible to
use disk based backup with auto changer role. I plan to use volumes
with a size of around 200Gbytes, making the setup fairly flexible, not
making a too big hole when volumes need to be purged. Total disk space
is around 30TB.

If somebody has got experience with disk based, multi volume Bacula
backup, I would be grateful about some information (tips, what to
expect, pitfalls, etc.).



Hi Peter,

I am using file volumes for many years and I can't recall any
special tips or pitfalls worth mentioning.

Make sure the directory ownership and permissions are correct,
that is, that storage daemon is able to access it (R/W).

There are some differences regarding options such as Random Access,
Removable Media and Device Type.
As a disk is not a sequential-access type of media, you should set
the Random Access option to yes. Unlike a tape, a disk is not
removable media, so you may want to set the RemovableMedia option
to no.

Regarding the volume size, I always chose a size of 10 GB.

You will need to test whether there is a gain from employing
a SpoolDirectory for your disk-based backup.
If the file system containing your file volumes is
mounted over the network (e.g. using iSCSI or NFS), it might
be a good idea to use a local spool directory.
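
Put together, the options mentioned above might end up in a bacula-sd.conf Device resource roughly like this (a sketch; names and paths are illustrative assumptions):

```conf
Device {
  Name = FileStorage                # assumed name
  Media Type = File
  Device Type = File
  Archive Device = /backup/volumes  # assumed path; must be R/W for the SD
  Random Access = yes               # disk is not sequential media
  Removable Media = no              # unlike tape, the disk stays put
  Automatic Mount = yes
  Label Media = yes                 # let Bacula label new file volumes
}
```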


Regards!


Hi Josip,

thanks for your input. Today, I am using 160GB virtual tape volumes with 
mhvtl and that seems to be a good trade-off. 10GB volumes in the new 
setting would be a nightmare to manage. That would be around 3,000 
volumes! The volumes will be on a physical RAID set, but I will use 
an SSD drive for spooling anyway. One of the bottlenecks today is the slow 
throughput of the physical disks. It is not worth investing in the old 
rig, as it is ripe for decommissioning anyway.


Best regards,

Peter






Re: [Bacula-users] Virtual tapes or virtual disks

2022-01-21 Thread Peter Milesson via Bacula-users




On 21.01.2022 15:46, Josip Deanovic wrote:

On 2022-01-21 15:29, Peter Milesson via Bacula-users wrote:

thanks for your input. Today, I am using 160GB virtual tape volumes
with mhvtl and that seems to be a good trade off. 10GB volumes in the
new setting would be a nightmare to manage. That would be around 3,000
volumes! The volumes will be on a physical RAID set, but I will anyway
use an SSD drive for spooling. One of the bottlenecks today is the
slow throughput of the physical disks. It is not worth investing in
the old rig, as it is anyway ripe for decommission.


I have many thousands of file volumes and I don't see any problem
managing them. They are being managed by Bacula.


Regards!


Hi Josip,

Thanks for the information. I will have a look how to implement it.

Best regards,

Peter





Re: [Bacula-users] Virtual tapes or virtual disks

2022-01-25 Thread Peter Milesson via Bacula-users



On 23.01.2022 11:37, Radosław Korzeniewski wrote:

Hello,

pt., 21 sty 2022 o 14:22 Peter Milesson via Bacula-users 
 napisał(a):


If somebody has got experience with disk based, multi volume Bacula
backup, I would be grateful about some information (tips, what to
expect, pitfalls, etc.).


The best approach IMVHO (but not only mine) is to configure one job = one 
volume. You gain no real benefit from limiting the size of a single volume.
In the single volume = single job configuration you can set up job 
retention very easily, as purging a volume will purge a single job only.
There is no need to "wait" for a particular volume to fill up before 
retention starts. Purging a volume affects a single job only. And finally you 
end up with far fewer volumes than when limiting their size 
to e.g. 10 GB.


best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net


Hi Radosław,

thanks for your input. I'm still in the planning phase for the new 
backup server, so all information is valuable.


Excuse some elementary questions. How do you define volumes in the 
pool for your configuration? I've been using Bacula a little more than 
11 years, but up till now I've used virtual tapes with fixed sizes. Once 
set up, it just keeps ticking, and the original configuration stays.


I use a one-year retention scheme, and that will not change. I have got 
no reason to purge jobs within that period, and after a year I would 
like to do as little work as possible, having old volumes purged 
automatically when there is no more room on the RAID array. Using 
tapes, this was automatic: when all volumes were full, the volume with 
the oldest write time was purged and reused.
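
In Bacula terms, that rotation maps onto Pool settings roughly like this (a sketch; names and values are assumptions based on the sizes mentioned):

```conf
Pool {
  Name = FilePool               # assumed name
  Pool Type = Backup
  Volume Retention = 1 year
  AutoPrune = yes
  Recycle = yes
  Recycle Oldest Volume = yes   # reuse the oldest volume when none are free
  Maximum Volume Bytes = 200G
  Maximum Volumes = 150         # caps the space Bacula takes on the array
}
```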


Best regards,

Peter


Re: [Bacula-users] Virtual tapes or virtual disks

2022-01-25 Thread Peter Milesson via Bacula-users



On 25.01.2022 9:31, Lionel PLASSE wrote:

I’ve made a rotation system like this:

One Bacula storage “backup” corresponds to one Bacula device 
“Backup” <=> root directory /pool-bacula/, automounted by udev.

Three pools (full, diff, incr) are attached to this device.

I have 8 USB disks, XFS-formatted with the XFS label “Bacula” (better than 
a UUID for configuring udev rules), so udev can mount them automatically when 
plugged into the Debian system. I don't use Bacula for the USB disk mounting 
task.

On each disk I label one volume:
3 for the 3-month full rotation volumes,
four for the 4-week diff rotation,
one for the daily incrementals, auto-purged each week.

By configuring the right retention periods between weekdays and months, 
and by adding the correct number of job volumes on each one, you can 
easily configure a schedule with 3 steps:
one for full, attached to the full pool, on each first Wednesday of the 
month for example,
one for diff, attached to the diff pool, each week from the 2nd to the 5th 
Wednesday,

and the third for incrementals each day from Monday to Thursday.

With this schedule you can keep a great number of backups, so each day 
you can restore back to the previous Monday with the incremental 
backups, each week is kept by the differential backups, and the 3 full 
monthly backups too.
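
The udev part of this setup could be sketched with a rule along these lines (the label and mount point follow the setup described above; the exact rule is an assumption):

```conf
# /etc/udev/rules.d/99-bacula-usb.rules -- illustrative sketch only
# Mount any partition carrying the filesystem label "Bacula" at
# /pool-bacula as soon as it is plugged in.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_LABEL}=="Bacula", \
  RUN+="/usr/bin/systemd-mount --no-block --collect $devnode /pool-bacula"
```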


De : Radosław Korzeniewski 
Envoyé : dimanche 23 janvier 2022 11:37
À : Peter Milesson 
Cc : bacula-users@lists.sourceforge.net
Objet : Re: [Bacula-users] Virtual tapes or virtual disks

Hello,

pt., 21 sty 2022 o 14:22 Peter Milesson via Bacula-users 
<mailto:bacula-users@lists.sourceforge.net> napisał(a):

If somebody has got experience with disk based, multi volume Bacula
backup, I would be grateful about some information (tips, what to
expect, pitfalls, etc.).

The best IMVHO (but not the only mine) is to configure one job = one 
volume. You will get no real benefit to limit the size of a single volume.
In the single volume = single job configuration you can set up job 
retention very easily as purging a volume will purge a single job only.
It is not required to "wait" a particular volume to fill up to start 
retention. Purging a volume affects a single job only. And finally you 
end up with a way less number of volumes then when limiting its size 
to i.e. 10G.


best regards
--
Radosław Korzeniewski
mailto:rados...@korzeniewski.net

Hi Lionel,

thanks for the information. I've tried your way of doing backups after 
throwing out the physical tape drive. It just wasn't workable. The first 
problem is having office personnel properly manage the different USB 
drives (same problem as with tapes). Second, backup to USB drives takes 
forever, and there is not enough time for a full backup to complete 
before Monday morning comes, not to speak of the few times there were backup 
glitches and the full backup had to be repeated. So the only 
solution for me is backup to a huge fixed storage. Now I'm planning how 
to implement volume management on this fixed storage.


Best regards,

Peter






Re: [Bacula-users] Virtual tapes or virtual disks

2022-01-26 Thread Peter Milesson via Bacula-users



On 26.01.2022 0:02, Josip Deanovic wrote:

On 23.01.2022 11:37, Radosław Korzeniewski wrote:

Hello,

pt., 21 sty 2022 o 14:22 Peter Milesson via Bacula-users 
 napisał(a):


    If somebody has got experience with disk based, multi volume Bacula
    backup, I would be grateful about some information (tips, what to
    expect, pitfalls, etc.).


The best IMVHO (but not the only mine) is to configure one job = one 
volume. You will get no real benefit to limit the size of a single 
volume.
In the single volume = single job configuration you can set up job 
retention very easily as purging a volume will purge a single job only.
It is not required to "wait" a particular volume to fill up to start 
retention. Purging a volume affects a single job only. And finally 
you end up with a way less number of volumes then when limiting its 
size to i.e. 10G.



There are many different approaches which can fit different requirements.
I don't see the benefit of having a single job per volume as Bacula
is tracking media, files, jobs and everything else.
That's why Bacula has a catalog which allows the backup system
to determine the location and state of volumes, jobs, files, etc.

To logically separate backup data I use pools and leave the rest
to Bacula.

When Bacula needs a particular file volume, if it's available Bacula
will simply use it and if it's not or if we are using tape volume
which is currently not in the tape drive/library, Bacula will ask
for the volume by name.

The number of smaller file volumes (e.g. 10GB) is not an issue as
Bacula is handling them correctly and automatically (provided that
Bacula is correctly configured, of course).


I'll go through a few examples where smaller file volumes (e.g. 10GB)
could prove useful:

1. If the catalog database gets corrupted or completely lost,
   due to the small size it's easier and faster to handle
   and determine the volumes which contain the database backup.
   That makes the process of importing the data into a new
   catalog database using a tool such as bscan easier.

2. Similar to 1), it is easier to manage small file volumes and
   extract particular jobs from a volume using bextract tool.

3. If the space is an issue (as it usually is), bigger volumes
   tend to eat more space which cannot be reused (volume
   cannot be recycled) as long as the volume contains a single
   job we want to preserve.

4. Although I don't like that approach, sometimes people choose
   to sync or copy whole file volumes to a secondary location
   using the usual tools such as rsync, cp and similar.
   In such cases it is better to keep file volumes small.

5. When recycling a file volume, it will take longer to
   wipe a bigger file volume. If a volume is smaller it will
   take less time to recycle, ensuring more time windows where
   other tasks could benefit from I/O performance. In the case of
   large file volumes all other tasks would have to fight for
   the opportunity to access the file system, and that gets more
   obvious when a slow network file system is being used.

6. In case of any kind of corruption of a file volume due to
   the file system corruption or damage in transport, it is
   likely that less data will be lost in case of a smaller
   file volume. And again, it's easier to handle smaller file
   volume when trying to recover pieces of data.


Regards!


Great Post!

Your way of explaining the reasoning for using smaller file volumes 
is very appreciated. The truth is, most files are fairly small, 
particularly files created by office users. They range from a few kilobytes 
up to some tens of megabytes. Videos can be huge, but I guess most 
companies handle instruction videos and similar, not full-blown 
movies. This type of content very seldom exceeds 1GB. So a 10GB volume 
limit seems to be a good balance.


I'm used to fixed volume sizes from the tape drives, I feel comfortable 
with it, and I do not need to relearn a lot to configure the Bacula 
system. The only thing I haven't found out is how to preallocate the 
number of volumes needed. Maybe there is no need, if the volumes are 
created automagically. Most of the RAID array will be used by Bacula, 
just leaving a couple of percent as free space. When using mhvtl, I 
started a script with the tape size and number of tapes I wanted, and 
the corresponding tape directories and volumes were created on the fly.
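
If pre-creating volumes is wanted anyway, a small script can generate the bconsole label commands, much like the mhvtl script did (the pool and storage names here are assumptions; with automatic labelling via Label Media and a Label Format in the Pool, this step is unnecessary):

```shell
# Print label commands for 150 file volumes; pipe the output into
# bconsole to actually create them. Pool/storage names are assumptions.
for i in $(seq -w 1 150); do
  printf 'label volume=Vol-%s pool=FilePool storage=File1\n' "$i"
done   # e.g.: ... | bconsole
```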


Thanks Josip!

Best regards,

Peter




Re: [Bacula-users] Virtual tapes or virtual disks

2022-01-26 Thread Peter Milesson via Bacula-users




On 26.01.2022 18:42, dmitri maziuk wrote:

On 2022-01-26 11:06 AM, Peter Milesson via Bacula-users wrote:



...
Your way of explaining the reasoning of why to use smaller file 
volumes, is very appreciated. 

...

The only thing I haven't found out is how to preallocate the number 
of volumes needed. Maybe there is no need, if the volumes are created 
automagically. Most of the RAID array will be used by Bacula, just 
leaving a couple of percent as free space.


If you use actual disks as "magazines" with vchanger, you need to 
pre-label the volumes. If you use just one big filesystem, you can let 
bacula do it for you (last I looked that functionality didn't work w/ 
autochangers).


If you use disk "magazines" you also need to consider the whole-disk 
failure. If you use one big filesystem, use RAID (of course) to guard 
against those. But then you should look at the number of file volumes: 
some filesystems handle large numbers of directory entries better than 
others and you may want to balance the volume file size vs the number 
of directory entries.


For single filesystem, I suggest using ZFS instead of a traditional 
RAID if you can: you can later grow it on-line by replacing disks w/ 
bigger ones when (not if) you need to.


Dima


Thanks for your input Dima.

I'm having a RAID5 array of about 40TB in size. A separate RAID 
controller card handles the disks. I'm planning to use the normal ext4 
file system. It's standard and well known, though most probably not the 
fastest. That will not have any great impact, as there is a 4TB NVMe SSD 
drive, which takes the edge off the slow physical disk performance.


Best regards,

Peter






[Bacula-users] Error backing up after changing backup definitions

2023-01-17 Thread Peter Milesson via Bacula-users

Hi folks,

After changing a FileSet definition, removing one of the lines in 
bacula-dir.conf, and increasing the maximum number of volumes in the 
Pool, backups are no longer working.


The messages I get are:

17-Jan 14:30 Server1Fd JobId 837: Fatal error: Authorization key rejected by 
Storage daemon.
For help, please see 
http://www.bacula.org/rel-manual/en/problems/Bacula_Frequently_Asked_Que.html
17-Jan 14:30 Location1Dir JobId 837: Fatal error: Bad response to Storage 
command: wanted 2000 OK storage
, got 2902 Bad storage

17-Jan 15:00 Location1Dir JobId 838: Error: Director's connection to SD for 
this Job was lost.

When running bconsole, status for the director, the following line is 
displayed:


 838  Back Full  0 0  Server1    is waiting for Client 
Server1Fd to connect to Storage File1

No changes to names, passwords and similar have been made. When checking 
on the client with netstat, I can see that there is an established 
connection between the director and the file daemon.


There are no firewalls involved.

The versions:

Backup server - Debian Bullseye 11.6 with Bacula 11.0.5
Backup client 1 - CentOS 7.9 with Bacula 9.0.6-3.el7.centos

Best regards,

Peter


Re: [Bacula-users] Error backing up after changing backup definitions

2023-01-17 Thread Peter Milesson via Bacula-users

Hi folks,

I think I have got it tracked down. The backup box is a multi-homed 
server, and the Autochanger had the wrong address. After changing the 
address, it's working.


I will however, post a new question about configuration on a multi-homed 
backup server.


Best regards,

Peter

On 17.01.2023 15:35, Peter Milesson via Bacula-users wrote:

Hi folks,

After changing a FileSet definition, removing one of the lines in 
bacula-dir.conf, and increasing the maximum number of volumes in the 
Pool, backups are no longer working.


The messages I get are:
17-Jan 14:30 Server1Fd JobId 837: Fatal error: Authorization key rejected by 
Storage daemon.
For help, please see 
http://www.bacula.org/rel-manual/en/problems/Bacula_Frequently_Asked_Que.html
17-Jan 14:30 Location1Dir JobId 837: Fatal error: Bad response to Storage 
command: wanted 2000 OK storage
, got 2902 Bad storage

17-Jan 15:00 Location1Dir JobId 838: Error: Director's connection to SD for 
this Job was lost.

When running bconsole, status for the director, the following line is 
displayed:

  838  Back Full  0 0  Server1    is waiting for Client 
Server1Fd to connect to Storage File1
No changes to names, passwords and similar have been made. When 
checking on the client with netstat, I can see that there is an 
established connection between the director and the file daemon.


There are no firewalls involved.

The versions:

Backup server - Debian Bullseye 11.6 with Bacula 11.0.5
Backup client 1 - CentOS 7.9 with Bacula 9.0.6-3.el7.centos

Best regards,

Peter





[Bacula-users] NDMP plugin for Bacula

2023-09-12 Thread Peter Milesson via Bacula-users

Hi folks,

Is there anybody having any experience with Bacula community edition and 
the NDMP plugin (commercial)? Does it work with the community edition?


I need to back up data from an NDMP-compliant NAS, and it is starting to 
get urgent. I contacted Bacula Systems about a month ago about this, but no 
reaction so far. Hopefully, I can get some more information from the 
community.


Best regards,

Peter





Re: [Bacula-users] NDMP plugin for Bacula

2023-09-12 Thread Peter Milesson via Bacula-users

Hi Radosław,

Thanks for the information.

I know that the NDMP-plugin is commercial, and we are willing to pay 
license fees (up to a reasonable amount, of course) for using it with 
Bacula community edition. I'm not interested in the full Bacula 
Enterprise edition, as it's a complete overkill in our case. Another 
aspect is migration. If forced to migrate our very well working Bacula 
installation, there are also other backup products to consider.


Though I'm quite annoyed that Bacula Systems hasn't bothered to 
contact me at all.


Best regards,

Peter

On 12.09.2023 18:00, Radosław Korzeniewski wrote:

Hello,

wt., 12 wrz 2023 o 13:48 Peter Milesson via Bacula-users 
 napisał(a):


Hi folks,

Is there anybody having any experience with Bacula community
edition and
the NDMP plugin (commercial)? Does it work with the community edition?


If you have access to Bacula Enterprise NDMP Plugin then use it with 
your Bacula Enterprise installation. Do not bother to force the Bacula 
Community client to use this plugin.

Just use Bacula Enterprise. This is the simplest possible solution.

From a technical point of view I know how to do it. But from a legal 
point of view you should not use this software in that way.



I  need to backup data from a NDMP-compliant NAS, and it starts to
get
urgent. 



Just use Bacula Enterprise. This is the simplest possible solution.

I contacted Bacula Systems about a month ago about this, but no
reaction so far. Hopefully, I can get some more information from the
community.


Ok, this changes the view.
I can forward your email to our sales to close the deal if you wish. 
We (Inteos) are very professional and we can help in every case.


Radek
--
Radosław Korzeniewski
rados...@korzeniewski.net


Re: [Bacula-users] NDMP plugin for Bacula

2023-09-12 Thread Peter Milesson via Bacula-users

Hi Radosław,

Thanks for the information. I will think about this. If the migration is 
so simple, it helps a lot.


I also finally got contacted by Bacula Systems.

Best regards,

Peter

On 12.09.2023 18:38, Radosław Korzeniewski wrote:

Hello,

wt., 12 wrz 2023 o 18:24 Peter Milesson via Bacula-users 
 napisał(a):


Hi Radosław,

Thanks for the information.

I know that the NDMP-plugin is commercial, and we are willing to
pay license fees (up to a reasonable amount, of course) for using
it with Bacula community edition.


If you want to use any Bacula Enterprise Plugin then you need to use 
Bacula Enterprise software. It won't work with Bacula Community, which 
lacks the required API and has an incompatible license.


I'm not interested in the full Bacula Enterprise edition, as it's
a complete overkill in our case.


Why? The base (core) Bacula Enterprise has "basically" the same 
features built-in.


Another aspect is migration.


You can't find an easier migration in the wild! You will use the 
same configuration files and the same catalog database (which requires 
a simple catalog schema upgrade, similar to a standard Bacula version 
upgrade). You can do this "in-place".


If forced to migrate our very well working Bacula installation,
there are also other backup products to consider.


I do not understand. I think you have mistaken the product. If your 
Bacula Community is well working then Bacula Enterprise will do the 
same or better.



Though I'm quite annoyed that Bacula Enterprises hasn't bothered
to contact me at all.


If you still want an offer for Bacula Enterprise we can prepare it for 
you. *Just say yes*. Without your confirmation no one will force you 
to get one.

Radek
--
Radosław Korzeniewski
rados...@korzeniewski.net


Re: [Bacula-users] Questions re installing on Debian 11 Bullseye

2023-10-04 Thread Peter Milesson via Bacula-users

Hi folks,

Installing on Debian 11 is bound to fail, as the documentation and 
reality are not in sync. I had this problem yesterday, and spent quite 
some time figuring out what was wrong. About the package manager entry 
under /etc/apt/sources.list.d, the document mentioned below states:


"# Bacula Community
deb https://www.bacula.org/packages/@access-key@/debs/@bacula-version@ 
@debian-version@ main"


But a working entry should be (for the amd64 architecture):

deb https://www.bacula.org/packages/@access-key@/debs/@bacula-version@/@debian-version@/amd64 @debian-version@ main


Best regards,

Peter

On 04.10.2023 8:31, Davide F. via Bacula-users wrote:

Hi,

There’s a specific documentation for rpm and deb Bacula installation.

https://bacula.org/whitepapers/CommunityInstallationGuide.pdf

Hope it helps

Davide

On Wed, 4 Oct 2023 at 07:36 Vaughan Wickham  wrote:

Hello,

I’m trying to follow the installation / setup instructions
here
https://www.bacula.org/13.0.x-manuals/en/main/Installing_Bacula.html

Unfortunately, I’m finding these docs hard to follow.

The ‘Installing Bacula’ section appears to focus primarily on
building Bacula from the source.

However, in my case, I have downloaded the Debian 11 binaries, so
I think this section is largely irrelevant in my scenario.

I have downloaded and installed the following packages:

sudo dpkg -i bacula-common_13.0.3-1_amd64.deb

sudo dpkg -i bacula-client_13.0.3-1_amd64.deb

sudo dpkg -i bacula-console_13.0.3-1_amd64.deb

sudo apt-get install dbconfig-common

sudo apt-get install dbconfig-pgsql

sudo apt-get install postgresql-contrib

sudo dpkg -i bacula-postgresql_13.0.3-1_amd64.deb

During the postgresql installation a menu appeared, and I was
prompted to perform some high-level database configuration.

So, to the best of my knowledge I have installed Bacula OK and
basic PostgreSQL configuration is done.

In the tutorial section
https://www.bacula.org/13.0.x-manuals/en/main/Brief_Tutorial.html

The first step is:

 1. cd 

But I don’t have an install directory, as I downloaded all of the
binaries. So, where should I be starting from?

Step 2. Start the Database.

According to systemctl: PostgreSQL service is active.

Step 3. Start the Daemons with ./bacula start

I get no such file or directory.

The files / folders that I do have are:

/opt/bacula

/opt/bacula/scripts/bacula

/var/log/bacula

/var/lib/bacula

Would appreciate some suggestions on what to do / try next.

Also, is there by chance another installation guide / tutorial
that is perhaps easier to follow than this guide. I’ve had a look
on Google and YouTube and while I have found some other resources,
they don’t seem to be the “answer”. So, for now, I’ve been
persisting with this guide.

Thanks

Regards,

Vaughan



Re: [Bacula-users] Questions re installing on Debian 11 Bullseye

2023-10-05 Thread Peter Milesson via Bacula-users

Hi Vaughan,

this article describes how to use the new method for adding GPG keys. It is 
the best instruction I have found so far, and works from Bullseye and up.


https://www.linuxuprising.com/2021/01/apt-key-is-deprecated-how-to-add.html
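
In short, the method boils down to storing the dearmored key under /usr/share/keyrings/ and pointing the repository entry at it with signed-by, roughly like this (the keyring file name is an illustrative assumption; the repository line uses the same placeholders as the installation guide):

```conf
# /etc/apt/sources.list.d/bacula.list -- illustrative sketch
deb [arch=amd64 signed-by=/usr/share/keyrings/bacula-archive-keyring.gpg] https://www.bacula.org/packages/@access-key@/debs/@bacula-version@/@debian-version@/amd64 @debian-version@ main
```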

Best regards,

Peter

On 05.10.2023 5:48, Vaughan Wickham wrote:


Hello,

I’ve attempted to follow the documentation below (thanks to Davide for 
sharing)


https://bacula.org/whitepapers/CommunityInstallationGuide.pdf

However, step 4.2 (Import the GPG key) fails due to apt-key being 
deprecated. I also understand that there are potential issues with 
the trusted.gpg.d replacement.


I was wondering, if it might be possible to download the binaries from 
the CommunityInstall site and then install them manually?


Regards,

Vaughan


