Re: [Bacula-users] Bacula 9.6.5 TLS issue - solved in 9.6.6

2020-10-05 Thread Phil Stracchino

On 9/28/20 12:33 PM, Phil Stracchino wrote:

On 2020-09-27 15:27, Phil Stracchino wrote:

I'm going to re-test the job-hanging problem that I encountered with
the 9.6.5 Director and see whether it is resolved in 9.6.6 as well. It
mysteriously appeared between 9.6.3 and 9.6.5; with luck, it has
vanished just as mysteriously.


test phase 1:  All clients and Storage on 9.6.6, Director still on 9.6.3
No hung jobs so far.  I plan to leave it this way for at least a week
before upgrading the Director to 9.6.6 as well.


OK, a week of no issues and monthly full backups just ran.  I am now 
updating the Director from 9.6.3 to 9.6.6.  No other changes.


Fingers crossed.


--
  Phil Stracchino
  Babylon Communications
  ph...@caerllewys.net
  p...@co.ordinate.org
  Landline: +1.603.293.8485
  Mobile:   +1.603.998.6958


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] performance challenges

2020-10-05 Thread Josh Fisher


On 10/5/20 9:20 AM, Žiga Žvan wrote:


Hi,
I'm having some performance challenges. I would appreciate an educated
guess from an experienced Bacula user.


I'm replacing old backup software that writes to a tape drive with
Bacula writing to disk. The results are:
a) Windows file server backup from a deduplicated drive (1,700,000
files, 900 GB of data, 600 GB of deduplicated space used). *Bacula: 12
hours, old software: 2.5 hours*
b) Linux file server backup (50,000 files, 166 GB of data). *Bacula: 3.5
hours, old software: 1 hour*.


I have tried to:
a) turn off compression. The result is the same: backup speed around
13 MB/sec.
b) change the destination storage (from new IBM storage attached over
NFS to a local SSD disk attached to the Bacula server virtual machine).
It took 2 hours 50 minutes to back up the Linux file server (instead of
3.5 hours). A sequential write test with the Linux dd command shows a
write speed of 300 MB/sec for the IBM storage and 600 MB/sec for the
local SSD storage (far better than the actual throughput).




There are directives to enable/disable spooling of both the data and
the attributes (metadata) written to the catalog database. When using
disk volumes, you probably want to disable data spooling and enable
attribute spooling. Attribute spooling prevents a database write after
each file backed up; instead, the database writes are done as a batch
at the end of the job. Data spooling is rarely, if ever, needed when
writing to disk media.
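As a minimal sketch in bacula-dir.conf (job and JobDefs names here are
hypothetical), the two directives described above might look like this:

```
Job {
  Name = "fileserver-backup"   # hypothetical job name
  JobDefs = "DefaultJob"
  Spool Data = no              # no point spooling when writing to disk volumes
  Spool Attributes = yes       # batch the catalog inserts at end of job
}
```

With Spool Attributes = yes, the per-file catalog writes are deferred
and done in one batch during the 'attribute despooling' phase.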


With attribute spooling enabled, you can make a rough guess as to
whether DB performance is the problem by noting how long the job stays
in the 'attribute despooling' state. The status dir command in bconsole
shows the job state.



The network bandwidth is 1 Gbit/s (1 Gbit/s on the client, 10 Gbit/s on
the server), so I guess this is not the problem; however, I have
noticed that bacula-fd on the client side uses 100% of the CPU.


I'm using:
- Bacula server version 9.6.5
- Bacula client version 5.2.13 (original from the CentOS 6 repo).

Any idea what is wrong and/or what performance I should expect?
I would also appreciate some answers to the questions below (I think
this email went unanswered).


Kind regards,
Ziga Zvan




On 05.08.2020 10:52, Žiga Žvan wrote:


Dear all,
I have tested the Bacula software (9.6.5) and I must say I'm quite
happy with the results (e.g. compression, encryption, configurability).
However, I have some configuration/design questions I hope you can help
me with.


Regarding job schedule, I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)

I am using the dummy cloud driver that writes to local file storage. A
volume is a directory with fileparts. I would like to have separate
volumes/pools for each client, and I would like to delete the data on
disk after the retention period expires. If possible, I would like to
delete just the fileparts belonging to expired backups.


Questions:
a) At the moment, I'm using two backup job definitions per client and a
central schedule definition for all my clients. I have noticed that my
incremental job gets promoted to full after the monthly backup ("No
prior Full backup Job record found"), because the monthly backup is a
separate job, but Bacula searches for full backups within the same job.
Could you please suggest a better configuration? If possible, I would
like to keep the central schedule definition (if I manipulate pools in
a schedule resource, I would need to define them per client).


b) I would like to delete expired backups on disk (and in the catalog
as well). At the moment I'm using one volume in a daily/weekly/monthly
pool per client. In a volume, there are fileparts belonging to expired
backups (e.g. part1-23 in the output below). I have tried to solve this
with purge/prune scripts in my BackupCatalog job (as suggested in the
whitepapers), but the data does not get deleted. Is there any way to
delete fileparts? Should I create separate volumes after the retention
period? Please suggest a better configuration.


c) Do I need a restore job for each client? I would just like to
restore a backup on the same client, defaulting to the /restore
folder... When I use the bconsole restore all command, the wizard asks
me all the questions (e.g. 5 - last backup for a client; which client,
fileset...), but at the end it asks for a restore job, which changes
all the previously defined things (e.g. the client).


d) At the moment, I have not implemented autochanger functionality.
Clients compress/encrypt the data and send it to the Bacula server,
which writes it to one central storage system. Jobs are processed
sequentially (one at a time). Do you expect any significant performance
gain if I implement an autochanger so that jobs run simultaneously?


The relevant part of the configuration is attached below.

Looking forward to moving into production...
Kind regards,
Ziga Zvan


*Volume example* (fileparts 1-23 should be deleted):
[root@bacula 

Re: [Bacula-users] Baculum 9.6.5.1

2020-10-05 Thread Martin Simmons
Well spotted -- I found an old 7.4.4 bconsole and the sqlquery command does
indeed cause bconsole to exit when connected to a 9.6.6 director.

__Martin


> On Mon, 5 Oct 2020 06:12:14 +0200, Marcin Haba said:
> 
> Hello Elias,
> 
> Your bconsole is version 7.4.x or earlier. I know that isn't a
> requirement, but here it causes a problem.
> 
> Could you try to update your bconsole to 9.6.6 version?
> 
> Best regards,
> Marcin Haba (gani)
> 
> On Fri, 2 Oct 2020 at 22:23, Elias Pereira  wrote:
> >
> > Outputs in pastebin.
> >
> > https://pastebin.com/u81FRDu8
> >
> > On Fri, Oct 2, 2020 at 4:57 PM Martin Simmons  wrote:
> >>
> >> It looks like bconsole is broken.  Does it work with other commands,
> >> e.g. restore?
> >>
> >> The code doesn't contain many other useful debug messages, but please run
> >> again with -d900 instead of -d400.
> >>
> >> Also, please run bconsole with -d900 as well.
> >>
> >> __Martin
> >>
> >>
> >> > On Fri, 2 Oct 2020 13:47:54 -0300, Elias Pereira said:
> >> >
> >> > Ok. I even tested giving enter after sqlquery, but it leaves the 
> >> > bconsole.
> >> >
> >> > I run console in one terminal and bacula-dir -d400 in another. Here the
> >> > output.
> >> > bconsole:
> >> >
> >> > Connecting to Director 200.132.218.178:9101
> >> > 1000 OK: 103 bacula.sertao.ifrs.edu.br-dir Version: 9.6.6 (20 September
> >> > 2020)
> >> > Enter a period to cancel a command.
> >> > *sqlquery
> >> > Automatically selected Catalog: MyCatalog
> >> > Using Catalog "MyCatalog"
> >> > root@bacula:/etc/bacula#
> >> >
> >> > bacula-dir -d400:
> >> >
> >> > root@bacula:/home/ifrs# bacula-dir: bsock.c:851-0 socket=6 who=client
> >> > host=xxx.xxx.xxx.xxx port=47096
> >> > bacula-dir: job.c:1767-0 wstorage=FreeNAS1
> >> > bacula-dir: job.c:1776-0 wstore=FreeNAS1 where=Job resource
> >> > bacula-dir: job.c:1430-0 JobId=0 created 
> >> > Job=-Console-.2020-10-02_13.43.32_08
> >> > bacula-dir: cram-md5.c:69-0 send: auth cram-md5 challenge 
> >> > <1943621602.1601657012@bacula-dir> ssl=0
> >> > bacula-dir: cram-md5.c:133-0 cram-get received: auth cram-md5 
> >> > <326090878.1601657012@bconsole> ssl=0
> >> > bacula-dir: cram-md5.c:157-0 sending resp to challenge: 
> >> > +5/WIh+jd3/Hx4/XN4UOoC
> >> > bacula-dir: ua_dotcmds.c:177-0 Cmd: .help all
> >> > bacula-dir: ua_cmds.c:2613-0 UA Open database
> >> > bacula-dir: mysql.c:119-0 db_init_database first time
> >> > bacula-dir: mysql.c:224-0 mysql_init done
> >> > bacula-dir: mysql.c:263-0 mysql_real_connect done
> >> > bacula-dir: mysql.c:265-0 db_user=bacula db_name=bacula 
> >> > db_password=xxx
> >> > bacula-dir: mysql.c:301-0 opendb ref=1 connected=1 db=7fcaec005360
> >> > bacula-dir: ua_cmds.c:2662-0 DB bacula opened
> >> > bacula-dir: mysql.c:325-0 closedb ref=0 connected=1 db=7fcaec005360
> >> > bacula-dir: mysql.c:332-0 close db=7fcaec005360
> >> > bacula-dir: job.c:1466-0 Start dird free_jcr
> >> > bacula-dir: mem_pool.c:372-0 garbage collect memory pool
> >> > bacula-dir: job.c:1522-0 End dird free_jcr
> >> >
> >> >
> >> > On Fri, Oct 2, 2020 at 12:56 PM Martin Simmons  
> >> > wrote:
> >> >
> >> > > This output makes no sense to me.
> >> > >
> >> > > Firstly, you need to put the SELECT on a separate line.  It should look
> >> > > like
> >> > > this:
> >> > >
> >> > > Enter a period to cancel a command.
> >> > > *sqlquery
> >> > > Automatically selected Catalog: MyCatalog
> >> > > Using Catalog "MyCatalog"
> >> > > Entering SQL query mode.
> >> > > Terminate each query with a semicolon.
> >> > > Terminate query mode with a blank line.
> >> > > Enter SQL query: SELECT * FROM Path WHERE Path='';
> >> > > No results to list.
> >> > > Enter SQL query:
> >> > > End query mode.
> >> > > *
> >> > >
> >> > > Secondly, even if I enter it all on one line like you did, it still 
> >> > > prints
> >> > > "Entering SQL query mode." etc, but your output just contains the shell
> >> > > prompt.  Does bconsole crash?
> >> > >
> >> > > __Martin
> >> > >
> >> > >
> >> > > > On Fri, 2 Oct 2020 10:51:05 -0300, Elias Pereira said:
> >> > > >
> >> > > > Enter a period to cancel a command.
> >> > > > *sqlquery SELECT * FROM Path WHERE Path='';
> >> > > > Automatically selected Catalog: MyCatalog
> >> > > > Using Catalog "MyCatalog"
> >> > > > root@bacula:~#
> >> > > >
> >> > > > Enter a period to cancel a command.
> >> > > > *sqlquery SELECT * FROM Path WHERE PathId=92295;
> >> > > > Automatically selected Catalog: MyCatalog
> >> > > > Using Catalog "MyCatalog"
> >> > > > root@bacula:/home/ifrs# cd
> >> > > > root@bacula:~#
> >> > > >
> >> > > > Enter a period to cancel a command.
> >> > > > *sqlquery SELECT 'D', tmp.PathId, 0, tmp.Path, JobId, LStat, FileId,
> >> > > > FileIndex FROM (SELECT PathHierarchy.PPathId AS PathId, '..' AS Path 
> >> > > > FROM
> >> > > > PathHierarchy JOIN PathVisibility USING (PathId) WHERE
> >> > > PathHierarchy.PathId
> >> > > > = 92295 AND PathVisibility.JobId IN (17911) UNION SELECT 92295 AS 
> >> > > > PathId,
> >> > > > '.' AS 

[Bacula-users] performance challenges

2020-10-05 Thread Žiga Žvan

Hi,
I'm having some performance challenges. I would appreciate an educated
guess from an experienced Bacula user.


I'm replacing old backup software that writes to a tape drive with
Bacula writing to disk. The results are:
a) Windows file server backup from a deduplicated drive (1,700,000
files, 900 GB of data, 600 GB of deduplicated space used). *Bacula: 12
hours, old software: 2.5 hours*
b) Linux file server backup (50,000 files, 166 GB of data). *Bacula: 3.5
hours, old software: 1 hour*.


I have tried to:
a) turn off compression. The result is the same: backup speed around
13 MB/sec.
b) change the destination storage (from new IBM storage attached over
NFS to a local SSD disk attached to the Bacula server virtual machine).
It took 2 hours 50 minutes to back up the Linux file server (instead of
3.5 hours). A sequential write test with the Linux dd command shows a
write speed of 300 MB/sec for the IBM storage and 600 MB/sec for the
local SSD storage (far better than the actual throughput).
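For reference, a sequential write test along these lines can be run
with dd. This is a sketch, not the exact command used; the path and
size are placeholders. conv=fdatasync makes dd flush data to disk
before reporting the elapsed time, so the MB/s figure reflects the
storage rather than the page cache:

```shell
# Sequential write test sketch; path and size are placeholders.
# conv=fdatasync flushes data to disk before dd reports the elapsed
# time, so the MB/s figure is not inflated by the page cache.
TESTFILE=${TMPDIR:-/tmp}/dd-write-test
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
rm -f "$TESTFILE"
```

Note that such a test measures large sequential writes only; backing up
many small files also exercises metadata and catalog performance, which
dd does not capture.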


The network bandwidth is 1 Gbit/s (1 Gbit/s on the client, 10 Gbit/s on
the server), so I guess this is not the problem; however, I have
noticed that bacula-fd on the client side uses 100% of the CPU.


I'm using:
- Bacula server version 9.6.5
- Bacula client version 5.2.13 (original from the CentOS 6 repo).

Any idea what is wrong and/or what performance I should expect?
I would also appreciate some answers to the questions below (I think
this email went unanswered).


Kind regards,
Ziga Zvan




On 05.08.2020 10:52, Žiga Žvan wrote:


Dear all,
I have tested the Bacula software (9.6.5) and I must say I'm quite
happy with the results (e.g. compression, encryption, configurability).
However, I have some configuration/design questions I hope you can help
me with.


Regarding job schedule, I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)

I am using the dummy cloud driver that writes to local file storage. A
volume is a directory with fileparts. I would like to have separate
volumes/pools for each client, and I would like to delete the data on
disk after the retention period expires. If possible, I would like to
delete just the fileparts belonging to expired backups.


Questions:
a) At the moment, I'm using two backup job definitions per client and a
central schedule definition for all my clients. I have noticed that my
incremental job gets promoted to full after the monthly backup ("No
prior Full backup Job record found"), because the monthly backup is a
separate job, but Bacula searches for full backups within the same job.
Could you please suggest a better configuration? If possible, I would
like to keep the central schedule definition (if I manipulate pools in
a schedule resource, I would need to define them per client).
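[Editorial note on a): one common pattern, sketched below with
hypothetical names and times, is to keep a single backup job per client
and let a shared Schedule resource override the level per run, so full
and incremental backups live in the same job and Bacula finds the prior
Full record:]

```
# Hypothetical sketch: one Job per client; the shared Schedule picks
# the level (and optionally the pool) per run, so all levels share the
# same Job history and incrementals are not promoted to Full.
Schedule {
  Name = "CentralCycle"
  Run = Level=Full Pool=MonthlyPool 1st sun at 23:05
  Run = Level=Full Pool=WeeklyPool 2nd-5th sun at 23:05
  Run = Level=Incremental Pool=DailyPool mon-sat at 23:05
}
```

Note that a Pool override in a Schedule applies to every client using
that schedule, so strictly per-client pools would still need per-client
overrides or separate schedules.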


b) I would like to delete expired backups on disk (and in the catalog
as well). At the moment I'm using one volume in a daily/weekly/monthly
pool per client. In a volume, there are fileparts belonging to expired
backups (e.g. part1-23 in the output below). I have tried to solve this
with purge/prune scripts in my BackupCatalog job (as suggested in the
whitepapers), but the data does not get deleted. Is there any way to
delete fileparts? Should I create separate volumes after the retention
period? Please suggest a better configuration.
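[Editorial note on b): in Bacula 9.x, pruning only removes catalog
records; reclaiming disk space additionally requires truncating the
purged volumes. A bconsole sketch (the storage name is hypothetical,
and the exact syntax should be checked with "help truncate" on your
version; whether truncation removes individual cloud fileparts is worth
verifying):]

```
* prune expired volume yes
* truncate allpools storage=CloudStorage
```

These commands could be scheduled, e.g. as a RunScript on the
BackupCatalog job or from cron via bconsole.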


c) Do I need a restore job for each client? I would just like to
restore a backup on the same client, defaulting to the /restore
folder... When I use the bconsole restore all command, the wizard asks
me all the questions (e.g. 5 - last backup for a client; which client,
fileset...), but at the end it asks for a restore job, which changes
all the previously defined things (e.g. the client).


d) At the moment, I have not implemented autochanger functionality.
Clients compress/encrypt the data and send it to the Bacula server,
which writes it to one central storage system. Jobs are processed
sequentially (one at a time). Do you expect any significant performance
gain if I implement an autochanger so that jobs run simultaneously?
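[Editorial note on d): concurrency on disk storage is usually achieved
with a virtual disk autochanger plus Maximum Concurrent Jobs. A sketch
along the lines of the stock bacula-sd.conf example; names, paths, and
limits are placeholders:]

```
# Sketch of a virtual disk autochanger in bacula-sd.conf so several
# jobs can write concurrently, each to its own virtual drive.
Autochanger {
  Name = FileChgr1
  Device = FileChgr1-Dev1, FileChgr1-Dev2
  Changer Command = ""
  Changer Device = /dev/null
}
Device {
  Name = FileChgr1-Dev1
  Media Type = File1
  Archive Device = /backup/volumes   # placeholder path
  Maximum Concurrent Jobs = 5
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
# ...a second Device resource (FileChgr1-Dev2) defined the same way...
```

Maximum Concurrent Jobs would also need to be raised in the Director's
Director, Job, and Storage resources for jobs to actually overlap.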


The relevant part of the configuration is attached below.

Looking forward to moving into production...
Kind regards,
Ziga Zvan


*Volume example* (fileparts 1-23 should be deleted):
[root@bacula cetrtapot-daily-vol-0022]# ls -ltr
total 0
-rw-r--r--. 1 bacula disk   262 Jul 28 23:05 part.1
-rw-r--r--. 1 bacula disk 35988 Jul 28 23:06 part.2
-rw-r--r--. 1 bacula disk 35992 Jul 28 23:07 part.3
-rw-r--r--. 1 bacula disk 36000 Jul 28 23:08 part.4
-rw-r--r--. 1 bacula disk 35981 Jul 28 23:09 part.5
-rw-r--r--. 1 bacula disk 328795126 Jul 28 23:10 part.6
-rw-r--r--. 1 bacula disk 35988 Jul 29 23:09 part.7
-rw-r--r--. 1 bacula disk 35995 Jul 29 23:10 part.8
-rw-r--r--. 1 bacula disk 35981 Jul 29 23:11 part.9
-rw-r--r--. 1 bacula disk 35992 Jul 29 23:12 part.10
-rw-r--r--. 1 bacula disk 453070890 Jul 29 23:12 part.11
-rw-r--r--. 1 bacula disk 35995 Jul 30 23:09 part.12
-rw-r--r--. 1 bacula disk 

Re: [Bacula-users] ERR=Function not implemented

2020-10-05 Thread Kern Sibbald

  
  
Hello,

Sven gave a very nice comment. However, please note that the error
message was generated by the operating system, and though it might be a
bit confusing in this context, it is probably reasonable: Bacula was
apparently attempting something not permitted on that "virtual" file.

Best regards,
Kern

On 10/4/20 6:46 PM, Marc Chamberlin via Bacula-users wrote:

Hello - I am running Bacula version 9.6.6 on openSUSE Leap 15.2, and I
just set it up to back up my laptop. It worked, except that I got tons
of warning messages such as the following:

04-Oct 03:05 bigbang-fd JobId 1782: Error: Read error on file /sys/kernel/slab/:d-0001024/alloc_calls. ERR=Function not implemented
04-Oct 03:05 bigbang-fd JobId 1782: Error: Read error on file /sys/kernel/slab/:d-0001024/free_calls. ERR=Function not implemented
04-Oct 03:05 bigbang-fd JobId 1782: Error: Read error on file /sys/kernel/slab/TCPv6/alloc_calls. ERR=Function not implemented
04-Oct 03:05 bigbang-fd JobId 1782: Error: Read error on file /sys/kernel/slab/TCPv6/free_calls. ERR=Function not implemented

I took a look at these files, and the only thing of interest is that
they are all empty files installed by the distro. But the files do
exist and have the same ownership/permissions as other files that are
successfully backed up. Any ideas on how to clear up this sea of
warning messages I am getting?
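[Editorial note: one way to avoid these messages is to keep the file
daemon out of kernel pseudo-filesystems entirely. A FileSet sketch;
the name and fstype list are examples for a typical setup, not the
poster's actual configuration:]

```
FileSet {
  Name = "LaptopSet"      # example name
  Include {
    Options {
      signature = MD5
      onefs = no          # cross mount points...
      fstype = ext4       # ...but only onto these filesystem types,
      fstype = xfs        # which skips /sys, /proc, and the like
    }
    File = /
  }
  Exclude {
    File = /sys
    File = /proc
    File = /dev
  }
}
```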

FWIW - it seems to me that the developers could have come up with a better error message, such as saying which function was not implemented, or explaining why the failure occurs; something to give users a better idea of how to handle or fix the problem, or at a minimum telling the user that a serious problem occurred and to contact the developers if no solution is available. IMHO, of course...

Thanks in advance for helping me with this,  Marc...

  -- 
Computers: the final frontier. These are the voyages of the
  user Marc. 
  His mission: to explore strange new hardware. To seek out new
  software and new applications. To boldly go where no Marc has
  gone before!
(Attached is my public key to be used for encryption and
  sending encrypted email to m...@marcchamberlin.com.)
  
  
  
  



  


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users