Re: [Bacula-users] EXTERNAL - Re: /var/lib/mysql at 100%

2020-07-08 Thread Chaz Vidal
Just wanted to say thank you Phil for the pointers on this.

I have moved the database directory for now and am working on increasing the
database filesystem's disk space.

Cheers
Chaz


Chaz Vidal | ICT Infrastructure | Tel: +61-8-8128-4397 | Mob: +61-492-874-982 | 
chaz.vi...@sahmri.com

-Original Message-
From: Chaz Vidal  
Sent: Monday, 6 July 2020 2:22 PM
To: Phil Stracchino 
Cc: bacula-users 
Subject: Re: [Bacula-users] EXTERNAL - Re: /var/lib/mysql at 100%

Very, very much appreciated, Phil!  I'll schedule something to fix this up and we
will make sure we remediate this system.
I'll let you know how we go.


Cheers
Chaz



-Original Message-
From: Phil Stracchino  
Sent: Monday, 6 July 2020 12:10 PM
To: Chaz Vidal 
Cc: bacula-users 
Subject: Re: EXTERNAL - Re: [Bacula-users] /var/lib/mysql at 100%

On 2020-07-05 19:45, Chaz Vidal wrote:
> Thanks Phil,
> For some reason my backups are still running and completing.  Is it because 
> we still have table space?


Probably, yes.  It most likely means you have enough free space in the 
tablespace to keep going for now.

> This is an ext4 filesystem. I have attempted a dump whilst Bacula was running,
> and the resulting dump file was about 53% (70GB) of the size of the existing
> Bacula DB directory, which is 130GB.


OK, but that does not mean that you have 60GB of free space to play with.  Your 
dump contains only the data and schemas of your database, not the contents of 
the table indexes or unused column space.  The indexes will be rebuilt by 
mysqld when you reload the data from your dump.  A good set of indexes on a 
large database consumes significant space, but it is the voluminous and 
rapidly-traversable indexes that allow you to retrieve data from your database 
quickly.  Indexes are at the heart of what makes a relational database better 
and faster than a flat file.  They are what tells the database engine exactly 
where in your 70GB of data a single specific piece of data is stored.


> 
> The spool directory is so much larger with 6TB available.  Is this a 
> potential solution?
> 
> I'm starting out with Bacula so I'm assuming that we do a "/etc/init.d/bacula 
> stop", run the database dump commands and then start?


If you're referring to the spool directory for job spooling, it might be unwise 
to also put your MariaDB data directory there unless you first change your 
high-water setting to make sure that spooled jobs cannot encroach on your 
database space.  With that proviso, you could do it as a temporary measure 
until you can make more permanent provisions to expand your database space.
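
For illustration only, here is a minimal sketch of what that spool cap might look
like in a bacula-sd.conf Device resource.  The directive names come from the Bacula
Storage Daemon documentation; the device name, paths, and sizes below are
placeholders (not values from this thread) and should be sized so the spool cap
plus the database leaves plenty of headroom on the shared 6TB filesystem:

Device {
  Name = FileStorage                       # hypothetical device spooling on the 6TB filesystem
  Media Type = File
  Archive Device = /spool/bacula/volumes   # placeholder path
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Spool Directory = /spool/bacula/spool    # placeholder: job spool area on the shared filesystem
  Maximum Spool Size = 4000 GB             # total spool cap; leaves headroom for the database
  Maximum Job Spool Size = 500 GB          # optional per-job cap
}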

Here's the fastest way to accomplish the move:

1.  Stop Bacula
2.  Stop MariaDB
3.  Create your new data directory
4.  Move or copy the entire contents of /var/lib/mysql to the new directory.  
Make sure the new directory has the same ownership and permissions as 
/var/lib/mysql.
5.  Edit the global MariaDB configuration files to point datadir to the new 
location.  Also update anything else in the MariaDB configuration that refers 
to /var/lib/mysql (tmpdir for instance).
6.  Start MariaDB and make sure it comes up correctly
7.  Edit your Bacula configuration files to set a safe spool high-water mark
8.  Restart Bacula
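
As a rough sketch of steps 1-6 only (not a definitive procedure): on a
systemd-based host the commands might look like the following.  The service
names, the new data directory (/spool/mariadb-data), and the config file path
are assumptions to adapt to your system, not values taken from this thread:

# 1-2. Stop Bacula and MariaDB (service names vary by install; adjust as needed)
systemctl stop bacula-dir bacula-sd bacula-fd
systemctl stop mariadb

# 3-4. Create the new data directory and copy the contents of /var/lib/mysql,
#      preserving ownership, permissions, and timestamps
mkdir -p /spool/mariadb-data
cp -a /var/lib/mysql/. /spool/mariadb-data/
chown -R mysql:mysql /spool/mariadb-data

# 5. Point MariaDB at the new location, e.g. in /etc/my.cnf.d/server.cnf:
#      [mysqld]
#      datadir = /spool/mariadb-data
#    (also update tmpdir or anything else that referenced /var/lib/mysql)
#    On SELinux systems the new directory may also need the mysqld file
#    context (semanage fcontext + restorecon).

# 6. Start MariaDB and confirm it is running from the new datadir
systemctl start mariadb
mysql -e "SELECT @@datadir;"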


This does not touch on any optimization, tuning, enabling file-per-table, or 
getting rid of the unwanted lost+found in the data directory, but it will get 
you back up and running much more quickly than reloading 70GB of data from a 
logical dump and does not risk missing anything during a dump-and-reload.  You 
can prepare plans to deal with those issues later.


Remember that this is a stopgap solution until you can permanently expand your 
dedicated database storage.  A couple of things to remember
there:
- XFS is preferred over ext* for performance reasons (in fact, further
  development of the ext filesystem has been officially abandoned by Red Hat);
- The MySQL/MariaDB data directory does not have to be located at
  /var/lib/mysql; that's just the default;
- It is advised NOT to use the root of an ext* volume as your data directory
  (for example, if your data volume is ext4 mounted at /mariadb, make your data
  directory something like /mariadb/data);
- Whenever possible, database storage should be on physically separate devices
  from OS load or Bacula usage, to minimize I/O contention on the database.



--
  Phil Stracchino
  Babylon Communications
  ph...@caerllewys.net
  p...@co.ordinate.org
  Landline: +1.603.293.8485
  Mobile:   +1.603.998.6958


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



Re: [Bacula-users] Community binaries on Fedora 31: libreadline.so.7 not available

2020-07-08 Thread Phil Stracchino
On 2020-07-08 02:54, Davide Franco wrote:
> Hello Phil,
> 
> You don't need a dev environment as I have uploaded Bacula 9.6.5 RPM
> packages.


Got the Fedora 31 9.6.5 client installed today with no issues (except
moving the config file from /etc/bacula to /opt/bacula/etc).

Thanks again Davide!


-- 
  Phil Stracchino
  Babylon Communications
  ph...@caerllewys.net
  p...@co.ordinate.org
  Landline: +1.603.293.8485
  Mobile:   +1.603.998.6958


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula - optimize storage for cloud sync

2020-07-08 Thread Kern Sibbald

  
  
Hello Ziga,

Yes, you might be able to do what you want using a "debug" feature of the
Bacula Cloud driver.  It is not very well documented, but there is one section,
"3 File Driver for the Cloud", in the "Bacula Cloud Backup" whitepaper that
mentions it.
Basically, instead of using the "S3" driver in the Cloud resource of your
Storage Daemon, you use "File", and the HostName becomes the path where the
Cloud volumes (directories + parts) will be written.  For example, I use the
following for writing to disk instead of an S3 cloud:
Cloud {
  Name = DummyCloud
  Driver = "File"
  HostName = "/home/kern/bacula/k/regress/tmp/cloud"
  BucketName = "DummyBucket"
  AccessKey = "DummyAccessKey"
  SecretKey = "DummySecretKey"
  Protocol = HTTPS
  UriStyle = VirtualHost
}

The Device resource looks like:
Device {
  Name = FileStorage1
  Media Type = File1
  Archive Device = /home/kern/bacula/k/regress/tmp
  LabelMedia = yes;         # lets Bacula label unlabelled media
  Random Access = Yes;
  AutomaticMount = yes;     # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Device Type = Cloud
  Cloud = DummyCloud
}

I know the code runs and produces correct output, but I am not sure how it
will work in your environment.  If it works, great.  If it doesn't,
unfortunately I cannot provide support for the near future, but at some point
(probably 3-6 months) the project may support this feature.

Note: the next version of Bacula, coming in a few months, will have a good
number of new features and improvements to the Cloud driver (more bconsole
commands, for example).

Good luck, and best regards,
Kern
PS: If it does work for you, I would appreciate it if you would
  document it and publish it on the bacula-users list so others can
  use the Oracle cloud.





Re: [Bacula-users] Community binaries on Fedora 31: libreadline.so.7 not available

2020-07-08 Thread Phil Stracchino
On 2020-07-08 02:54, Davide Franco wrote:
> Hello Phil,
> 
> You don't need a dev environment as I have uploaded Bacula 9.6.5 RPM
> packages.
> 
> Fedora 32 packages will come soon.
> 
> Best regards
> 
> Davide

Thanks Davide.


-- 
  Phil Stracchino
  Babylon Communications
  ph...@caerllewys.net
  p...@co.ordinate.org
  Landline: +1.603.293.8485
  Mobile:   +1.603.998.6958


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula - optimize storage for cloud sync

2020-07-08 Thread Žiga Žvan

Hi Mr. Kern,

My question was a bit different. I have noticed that Oracle S3 is not
compatible, so I have implemented Oracle Storage Gateway (a Docker image that
uses the local filesystem as a cache and moves the data automatically to the
Oracle cloud). I have this filesystem mounted (NFSv4) on the Bacula server,
and I am able to back up data to this storage (and hence into the cloud).


I have around 1 TB of data daily and I'm a bit concerned about the bandwidth.
It will take approximately 4 hours to sync to the cloud, and I need to account
for future growth. As long as Bacula writes data to one file/volume, where it
stores full and incremental backups, this is not optimal for the cloud (the
file will change and all the data will upload each day). I have noticed that
Bacula stores data differently in the cloud configuration: a volume is not a
file but a folder with file parts. This would be better for me, because only
some file parts would change and move to the cloud via the Storage Gateway.
So the question is:
Can I configure bacula-sd to store data in file parts, without actual cloud
sync? Is this possible? I have tried several configurations of a bacula-sd
device with no luck. Should I configure some dummy cloud resource?


Kind regards,

Ziga Zvan


On 07/07/2020 14:40, Kern Sibbald wrote:


Hello,

Oracle S3 is not compatible with Amazon S3, or at least not with the libs3
that we use to interface to AWS and other compatible S3 cloud offerings.


Yes, Bacula Enterprise has a separate Oracle cloud driver that they 
wrote.  There are no plans at the moment to backport it to the 
community version.


Best regards,

Kern

On 7/7/20 8:43 AM, Žiga Žvan wrote:



Dear all,

I'm testing the community version of Bacula in order to change the backup
software for approximately 100 virtual and physical hosts. I would like to
move all the data to local storage and then move it to a public cloud (Oracle
Object Storage).


I believe that the community version of the software suits our needs. I have
installed:

- version 9.6.5 of Bacula on a CentOS 7 computer
- Oracle Storage Gateway (similar to AWS SG; it moves data to object storage
  and exposes it locally as NFSv4; for Bacula this is the backup destination).


I have read these two documents regarding Bacula and the cloud:
https://blog.bacula.org/whitepapers/CloudBackup.pdf
https://blog.bacula.org/whitepapers/ObjectStorage.pdf

It is mentioned in the documents above that Oracle Object Storage is not
supported at the moment.
Is it possible to *configure* a Bacula storage device in a way that uses the
*cloud media format* (a directory with file parts as a volume, instead of a
single file as a volume) *without actual cloud sync* (the Storage Gateway does
this in my case)? I am experimenting with variations of the definition below,
but I have been unable to solve the issue so far (it either tries to
initialize the cloud plugin or writes to a file instead of a directory).


Device {
  Name = FSOciCloudStandard
#  Device type = Cloud
  Device type = File
#  Cloud = OracleViaStorageGateway
  Maximum Part Size = 100 MB
#  Media Type = File
  Media Type = CloudType
  Archive Device = /mnt/baculatest_standard/backup
  LabelMedia = yes;       # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}
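
For reference, one possible adaptation of this definition along the lines of
the File-driver workaround Kern describes in his reply above.  This is only a
sketch, untested in this environment: the local cache path is a made-up
placeholder, and the dummy credential values are copied from Kern's example.

Cloud {
  Name = OracleViaStorageGateway
  Driver = "File"                               # write parts to a path instead of talking S3
  HostName = "/mnt/baculatest_standard/backup"  # Storage Gateway mount; volume directories + parts land here
  BucketName = "DummyBucket"
  AccessKey = "DummyAccessKey"
  SecretKey = "DummySecretKey"
  Protocol = HTTPS
  UriStyle = VirtualHost
}

Device {
  Name = FSOciCloudStandard
  Device Type = Cloud                           # cloud media format: a directory of parts per volume
  Cloud = OracleViaStorageGateway
  Maximum Part Size = 100 MB
  Media Type = CloudType
  Archive Device = /var/spool/bacula/cloud-cache   # hypothetical local cache directory
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
}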

Is there any plan to support Oracle Object Storage in the near future? It has
an S3-compatible API and Bacula Enterprise supports it...

Kind regards,
Ziga Zvan


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users