Re: [Bacula-users] Error on Virt Full backup

2019-01-19 Thread Kern Sibbald

Can you send the output for version 9.4.1 showing the failure?

Regards,
Kern

On 1/11/19 12:07 PM, i...@alink.biz wrote:

Hello Kern,

we've updated to 9.4.1 and the situation is the same.

Do you have any recommendations on how to identify the cause of such an
error?

Regards.


Tuesday, January 8, 2019, 11:19:20 PM:

KS> Hello,

KS> I would recommend that you consider doing an upgrade to 9.4.1, because
KS> from 9.0.x to 9.4.x, we have
KS> fixed a number of bugs with Virtual full backups, and significantly
KS> improved how it works, particularly
KS> when using virtual full in conjunction with accurate mode.

KS> Best regards,
KS> Kern

KS> On 1/8/19 2:42 AM, i...@alink.biz wrote:

Hello,

during the last Virtual Full backup, Bacula exited with the following
error. Does anyone have suggestions as to what may cause this and how to fix it?

1. After several OK reports in the log like these:
08-Jan 08:04 Storage JobId 22340: Recycled volume "VOL0064" on File device "vDrive-1" (/home/bacula), all previous data lost.
08-Jan 08:04 Storage JobId 22340: New volume "VOL0064" mounted on device "vDrive-1" (/home/bacula) at 08-Jan-2019 08:04.
08-Jan 08:05 Storage JobId 22340: End of Volume "VOL0655" at addr=5368693664 on device "vDrive-4" (/home/bacula).
08-Jan 08:05 Storage JobId 22340: Ready to read from volume "VOL0603" on File device "vDrive-4" (/home/bacula).
08-Jan 08:05 Storage JobId 22340: Forward spacing Volume "VOL0603" to addr=217
08-Jan 08:05 Storage JobId 22340: End of Volume "VOL0603" at addr=1195665354 on device "vDrive-4" (/home/bacula).

2. At the end of the procedure, this error follows:
08-Jan 08:05 Storage JobId 22340: Elapsed time=03:05:54, Transfer rate=24.16 M Bytes/second
08-Jan 08:05 Storage JobId 22340: Sending spooled attrs to the Director. Despooling 911,828,525 bytes ...
08-Jan 08:06 Director JobId 22340: Fatal error: sql.c:591 Path length is zero. File=
08-Jan 08:06 Director JobId 22340: Error: Bacula Director 9.0.6 (20Nov17):
Build OS:   x86_64-redhat-linux-gnu redhat
...
Backup Level:   Virtual Full
...
Elapsed time:   2 hours 1 min 27 secs
Priority:   10
SD Files Written:   0
SD Bytes Written:   0 (0 B)
Rate:   0.0 KB/s
Volume name(s): 
VOL0705|VOL0707|VOL0708|VOL0734|VOL0735|VOL0736|VOL0737|VOL0738|VOL0739|VOL0740|VOL0741|VOL0742|VOL0743|VOL0744|VOL0745|VOL0755|VOL0751|VOL0753|VOL0756|VOL0758|VOL0026|VOL0027|VOL0028|VOL0029|VOL0030|VOL0031|VOL0043|VOL0044|VOL0045|VOL0046|VOL0047|VOL0048|VOL0049|VOL0050|VOL0051|VOL0052|VOL0053|VOL0241|VOL0242|VOL0054|VOL0055|VOL0056|VOL0381|VOL0057|VOL0058|VOL0059|VOL0060|VOL0061|VOL0062|VOL0063|VOL0064
Volume Session Id:  131
Volume Session Time:1546269164
Last Volume Bytes:  1,470,626,226 (1.470 GB)
SD Errors:  0
SD termination status:  SD despooling Attributes
Termination:*** Backup Error ***
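The fatal error above is raised by the Director while it despools the file attributes: an attribute record arrived whose path component was empty. One hedged way to narrow this down is to check whether empty Path rows already exist in the catalog. The sketch below runs the relevant query against an in-memory SQLite stand-in for illustration only (the real catalog is MySQL or PostgreSQL, but the Path table and column names follow the standard Bacula schema); the SELECT itself can be tried in bconsole's sqlquery mode against the live database.

```python
import sqlite3

# In-memory stand-in for the Bacula catalog (the real one is MySQL or
# PostgreSQL; the Path table and its columns follow the standard schema).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Path (PathId INTEGER PRIMARY KEY, Path TEXT)")
con.executemany("INSERT INTO Path (Path) VALUES (?)",
                [("/etc/",), ("",), ("/home/",)])  # made-up sample rows

# An empty (or NULL) path in an attribute record is what trips the
# "Path length is zero" check when the Director inserts attributes.
bad = con.execute(
    "SELECT PathId, Path FROM Path WHERE Path = '' OR Path IS NULL"
).fetchall()
print(bad)
```

If the live catalog returns any rows here, the corrupt entries would be re-read by a Virtual Full consolidation, which could explain the failure; an empty result would point instead at bad attribute data coming from the client during the job.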





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users












Re: [Bacula-users] Problem with volume are not present when the tape is empty on incremental

2019-01-19 Thread Kern Sibbald

  
  
On a quick look this seems normal.  Bacula will not list volumes
that have no files stored on them.

Kern

On 1/10/19 2:39 PM, Olivier Delestre
  wrote:


  
  Hi,

  I use Bacula v9.2.2 with storage on disk.
  I notice that when an incremental is empty, Bacula creates a volume
  containing a single file (612 bytes).
  When I list with the query "5: List all backups for a Client",
  this volume is not present, but it is present in the pool and on the
  file system.
  See below for the volume conty-2546.
  Note: the query is issued from sample-query.sql.

  How can I get this volume into my list?
  Is this a bug or not?

  Thanks
  
  Choose a query (1-21): 5
  Enter Client Name: conty-fd
+--------+----------+---------------+-------+---------------------+----------+----------------+------------+
| jobid  | client   | fileset       | level | starttime           | jobfiles | jobbytes       | volumename |
+--------+----------+---------------+-------+---------------------+----------+----------------+------------+
| 15,276 | conty-fd | fileset_conty | F     | 2018-12-01 22:00:02 | 8,121    | 11,532,984,139 | conty-2471 |
| 15,323 | conty-fd | fileset_conty | I     | 2018-12-03 22:00:02 | 1        | 221,917        | conty-2481 |
| 15,540 | conty-fd | fileset_conty | I     | 2018-12-10 22:00:02 | 1        | 222,648        | conty-2487 |
| 15,683 | conty-fd | fileset_conty | I     | 2018-12-14 22:00:03 | 1        | 0              | conty-2503 |
| 15,755 | conty-fd | fileset_conty | I     | 2018-12-17 22:00:02 | 1        | 223,377        | conty-2510 |
| 15,977 | conty-fd | fileset_conty | I     | 2018-12-24 22:00:02 | 1        | 224,106        | conty-2514 |
| 16,198 | conty-fd | fileset_conty | I     | 2018-12-31 22:00:01 | 1        | 224,835        | conty-2529 |
| 16,374 | conty-fd | fileset_conty | F     | 2019-01-05 22:02:31 | 8,120    | 11,532,987,786 | conty-2401 |
| 16,421 | conty-fd | fileset_conty | I     | 2019-01-07 22:00:02 | 1        | 225,566        | conty-2407 |
| 16,495 | conty-fd | fileset_conty | I     | 2019-01-09 22:00:02 | 226      | 10,177,801,880 | conty-2412 |
+--------+----------+---------------+-------+---------------------+----------+----------------+------------+
  *list media pool=conty
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+-----------+
| mediaid | volumename | volstatus | enabled | volbytes       | volfiles | volretention | recycle | slot | inchanger | mediatype | voltype | volparts | lastwritten         | expiresin |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+-----------+
| 2,401   | conty-2401 | Used      | 1       | 11,542,823,537 | 2        | 5,529,600    | 1       | 0    | 0         | File      | 1       | 0        | 2019-01-05 22:05:03 | 5,124,733 |
| 2,404   | conty-2404 | Used      | 1       | 612            | 0        | 5,529,600    | 1       | 0    | 0         | File      | 1       | 0        | 2019-01-06 22:00:02 | 5,210,832 |
| 2,407   | conty-2407 | Used      | 1       | 226,476        | 0        | 5,529,600    | 1       | 0    | 0         | File      | 1       | 0        | 2019-01-07 22:00:02 | 5,297,232 |
| 2,410   | conty-2410 | Used      | 1       | 612            | 0        | 5,529,600    | 1       | 0    | 0         | File      | 1       | 0        | 2019-01-08 22:00:02 | 5,383,632 |
| 2,412   | conty-2412 | Used      | 1       | 10,185,386,134 | 2        | 5,529,600    | 1       | 0    | 0         | File      | 1       | 0        | 2019-01-09 22:00:34 | 5,470,064 |
| 2,415   | conty-2415 | Recycle   | 1       | 1              | 0        | 5,529,600    | 1       | 0    | 0         | File      | 1       | 0        | 2018-11-12 22:00:40 | 458,870   |
| 2,418   | conty-2418 | Purged    | 1       | 56,360,634     | 0        | 5,529,600    | 1       | 0    | 0         | File      | 1       | 0        | 2018-11-13 22:00:07 | 545,237   |
| 2,422   | conty-2422 | Purged    | 1       | 9,601,588,270  | 2        | 5,529,600    | 1       | 0    | 0         | File      | 1       | 0        | 2018-11-14 22:00:41 | 631,671   |
| 2,426   | conty-2426 | Purged    | 1       | 51,766,177     | 0        | 5,529,600    | 1       | 0    | 0         | File      | 1       | 0        | 2018-11-15 22:00:03 | 718,033   |
| 2,429   | conty-2429 | Purged    | 1       | 51,991,866     | 0        | 5,529,600    | 1       |
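Kern's explanation can be illustrated with a toy version of the catalog: the backup-listing query joins volumes to jobs, so a volume with no job or file records simply drops out of an inner join, while a LEFT JOIN from Media keeps it with a zero count. A hedged sketch using an in-memory SQLite stand-in for the Media and JobMedia tables (table and column names follow the standard Bacula schema; the rows themselves are made up):

```python
import sqlite3

# In-memory stand-in for the catalog's Media and JobMedia tables
# (standard Bacula table/column names; the data rows are made up).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Media (MediaId INTEGER PRIMARY KEY, VolumeName TEXT, VolBytes INTEGER);
CREATE TABLE JobMedia (JobId INTEGER, MediaId INTEGER);
INSERT INTO Media VALUES (2401, 'conty-2401', 11542823537);
INSERT INTO Media VALUES (2404, 'conty-2404', 612);  -- empty incremental's volume
INSERT INTO JobMedia VALUES (16374, 2401);           -- only the full wrote files
""")

# An inner join (what a backup-listing query effectively does) drops
# conty-2404 entirely; a LEFT JOIN from Media keeps it, job count 0.
rows = con.execute("""
    SELECT Media.VolumeName, COUNT(JobMedia.JobId)
    FROM Media LEFT JOIN JobMedia ON JobMedia.MediaId = Media.MediaId
    GROUP BY Media.MediaId
    ORDER BY Media.MediaId
""").fetchall()
print(rows)
```

So to make the 612-byte label-only volumes appear, a custom query in sample-query.sql would need to start from Media with LEFT JOINs, rather than joining through the job tables as query 5 does.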

Re: [Bacula-users] Bacula 9.4 S3 plugin

2019-01-19 Thread Kern Sibbald

  
  
Hello,

Hopefully, you have solved this.  If not, I need a copy of the
traceback file (should be in /opt/bacula/working) to go
any further with this.

Hopefully bacula.org binaries will be available shortly, which might
solve the problem.

Best regards,
Kern

On 1/9/19 6:51 PM, Dante F. B. Colò
  wrote:


  
  Hi Kern,

  Sorry, I didn't see the download URL for libs3 provided on the Bacula
  site; it compiles without errors now. But I have a problem: when I try
  to list volumes on S3 storage, the bacula-sd process dies with a
  "Segmentation Violation", as shown below. I'm using CentOS 7, but I
  also tried Debian 9 with the binary packages installed from the
  official Bacula Community repo and got the same thing. I'm posting the
  output of bacula-sd up to the point where it dies. Any suggestions?
  
  Best Regards
   Dante Colò
  
  
  
root@debian9VM01:/opt/bacula/bin# ./bacula-sd -d100 -dt -v -f
09-Jan-2019 13:30:33 bacula-sd: address_conf.c:274-0 Initaddr 0.0.0.0:9103
09-Jan-2019 13:30:33 am-pdc-sd: jcr.c:131-0 read_last_jobs seek to 192
09-Jan-2019 13:30:33 am-pdc-sd: jcr.c:138-0 Read num_items=0
09-Jan-2019 13:30:33 am-pdc-sd: plugins.c:97-0 load_plugins
09-Jan-2019 13:30:33 am-pdc-sd: plugins.c:133-0 Rejected plugin: want=-sd.so name=bacula-sd-cloud-driver-9.4.1.so len=31
09-Jan-2019 13:30:33 am-pdc-sd: plugins.c:133-0 Rejected plugin: want=-sd.so name=bpipe-fd.so len=11
09-Jan-2019 13:30:33 am-pdc-sd: plugins.c:133-0 Rejected plugin: want=-sd.so name=bacula-sd-aligned-driver-9.4.1.so len=33
09-Jan-2019 13:30:33 am-pdc-sd: plugins.c:121-0 Failed to find any plugins in /opt/bacula/plugins
09-Jan-2019 13:30:33 am-pdc-sd: stored.c:608-0 calling init_dev volume_local_1
09-Jan-2019 13:30:33 am-pdc-sd: init_dev.c:152-0 Num drivers=15
09-Jan-2019 13:30:33 am-pdc-sd: init_dev.c:165-0 loadable=0 type=1 loaded=1 name=file handle=0
09-Jan-2019 13:30:33 am-pdc-sd: init_dev.c:391-0 init_dev: tape=0 dev_name=/backup/bacula/data
09-Jan-2019 13:30:33 am-pdc-sd: stored.c:610-0 SD init done volume_local_1
09-Jan-2019 13:30:33 am-pdc-sd: acquire.c:663-0 Attach 0xc001df8 to dev "volume_local_1" (/backup/bacula/data)
09-Jan-2019 13:30:33 am-pdc-sd: stored.c:608-0 calling init_dev wasabi_s3_1
09-Jan-2019 13:30:33 am-pdc-sd: init_dev.c:152-0 Num drivers=15
09-Jan-2019 13:30:33 am-pdc-sd: init_dev.c:165-0 loadable=1 type=14 loaded=0 name=cloud handle=0
09-Jan-2019 13:30:33 am-pdc-sd: init_dev.c:430-0 loadable=1 type=14 loaded=0 name=cloud handle=0
09-Jan-2019 13:30:33 am-pdc-sd: init_dev.c:435-0 Open SD driver at /opt/bacula/plugins/bacula-sd-cloud-driver-9.4.1.so
09-Jan-2019 13:30:33 am-pdc-sd: bnet_server.c:86-0 Addresses 0.0.0.0:9103
09-Jan-2019 13:30:33 am-pdc-sd: init_dev.c:438-0 Driver=cloud handle=7ff00c001f90
09-Jan-2019 13:30:33 am-pdc-sd: init_dev.c:440-0 Lookup "BaculaSDdriver" in driver=cloud
09-Jan-2019 13:30:33 am-pdc-sd: init_dev.c:442-0 Driver=cloud entry point=7ff012d4ddd0
09-Jan-2019 13:30:33 am-pdc-sd: init_dev.c:391-0 init_dev: tape=0 dev_name=/backup/bacula/cloud
09-Jan-2019 13:30:33 am-pdc-sd: stored.c:610-0 SD init done wasabi_s3_1
09-Jan-2019 13:30:33 am-pdc-sd: acquire.c:663-0 Attach 0xc003e28 to dev "wasabi_s3_1" (/backup/bacula/cloud)
09-Jan-2019 13:30:53 am-pdc-sd: bsock.c:851-0 socket=6 who=client host=172.17.198.11 port=47292
09-Jan-2019 13:30:53 am-pdc-sd: dircmd.c:188-0 Got a DIR connection at 09-Jan-2019 13:30:53
09-Jan-2019 13:30:53 am-pdc-sd: cram-md5.c:69-0 send: auth cram-md5 challenge <2043907137.1547047853@am-pdc-sd> ssl=0
09-Jan-2019 13:30:53 am-pdc-sd: cram-md5.c:133-0 cram-get received: auth cram-md5 <1199549957.1547047853@am-pdc-dir> ssl=0
09-Jan-2019 13:30:53 am-pdc-sd: cram-md5.c:157-0 sending resp to challenge: 9U+ADnlRBG5TBWd4T8YocA
09-Jan-2019 13:30:53 am-pdc-sd: dircmd.c:214-0 Message channel init completed.
09-Jan-2019 13:30:53 am-pdc-sd: dircmd.c:1167-0 Found device wasabi_s3_1
09-Jan-2019 13:30:53 am-pdc-sd: dircmd.c:1211-0 Found device wasabi_s3_1
09-Jan-2019 13:30:53 am-pdc-sd: acquire.c:663-0 Attach 0xc004148 to dev "wasabi_s3_1" (/backup/bacula/cloud)
Bacula interrupted by signal 11: Segmentation violation
Kaboom! bacula-sd, am-pdc-sd got signal 11 - Segmentation violation at 09-Jan-2019 13:30:53. Attempting traceback.
Kaboom! exepath=/opt/bacula/bin/
Calli

Re: [Bacula-users] Temporarily place autochanger tape drive in service mode?

2019-01-19 Thread Kern Sibbald

Hello Patti,

Well, I am not sure it is beyond ridiculous, but it is certainly not
correct :-(


I will see if I can fix it.

Kern

On 1/4/19 4:42 PM, Clark, Patti via Bacula-users wrote:

Thank you to everyone who replied.  The disable storage command did what was
needed.  One issue with using it is the obnoxious noise produced by the
director in every job that was initially assigned the disabled drive.  Kern,
this is beyond ridiculous.
.
.
.
2019-01-04 10:27:51 rdback2-sd JobId 198498: Warning:
  Device "adminChanger" requested by DIR is disabled.
2019-01-04 10:27:51 rdback2-sd JobId 198498: Warning:
  Device "adminChanger" requested by DIR is disabled.
2019-01-04 10:27:51 rdback2-sd JobId 198498: Warning:
  Device "adminChanger" requested by DIR is disabled.
.
.
.
And it goes on for pages and pages until the job is assigned an available drive.

Patti
  


On 1/3/19, 1:30 PM, "Bill Arlofski"  wrote:

 On 01/02/2019 02:45 PM, Clark, Patti via Bacula-users wrote:
 > Is there a way to put a malfunctioning tape drive in an autochanger into a
 > service mode via commands without modifying bacula configuration files?
 >
 >
 >
 > */Patti Clark/*
 
 Hi Patti,
 
 There is an enable/disable command to do this:
 
 * disable storage= drive=
 
 
 A status storage will show this drive disabled by "User command":

 8<
 Device File: "speedy_drv_0" (/path/to/device/0) is not open.
Device is disabled. User command.
Drive 0 is not loaded.
Available Space= GB
 8<
 
 
 To enable it again:
 
 * enable storage= drive=
 
 I do not believe the disable is permanent, i.e. it will not survive an SD
 restart, but it should help in your situation.
 
 Hope this helps.
 
 
 Best regards,
 
 Bill
 
 
 --

 Bill Arlofski
 http://www.revpol.com/bacula
 -- Not responsible for anything below this line --
 
 
 ___

 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users
 









Re: [Bacula-users] Migrate jobs: Major data loss risk (Bug, unfixed)

2019-01-19 Thread Kern Sibbald

Hello,

In general, if the definitions of the jobs are not available, Bacula will
not have the information it needs to perform a migration (Next Pool, ...).
I agree with what Josh says -- if you want to access a Job for any reason,
it is better to keep the Job definition and possibly disable it or give it
no schedule.

That said, it appears that a bug was overlooked by Bacula Systems (me
included): if the Job definition is not available, the Migration job should
be failed, and it was not.

There is an Enterprise fix for this oversight that I will apply to the
community version.  I will also update the documentation to mention this
point.

Best regards,
Kern

On 1/14/19 12:49 PM, Alan Brown wrote:

If you are using Bacula for any form of archival work, or migrating OLD
backups, then you need to be aware of this issue.


The Migrate feature only migrates jobs in a volume that are in the
configuration file.


What this means is that if you have old jobs that are no longer being
backed up and have been removed from the configuration, but still exist
on archive media, then you will LOSE those jobs when you migrate them to
new media.


This is critically important to be aware of, for instance, if you are
moving your archive volumes from older to newer versions of LTO tape.

Just because jobs exist in the database and can be restored, does NOT
mean they will be migrated.


To make matters worse, the migrate job will FAIL, but the job will then
be falsely tagged as migrated.


I raised this with Bacula Systems over a year ago (we have the Enterprise
edition), but the developers don't consider this to be a bug and won't
fix it.
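One defensive step, before migrating archive media, is to compare the job names recorded on each volume against the Job resources still defined in bacula-dir.conf, so nothing on the volume falls outside the configuration. Below is a hedged sketch against an in-memory SQLite stand-in for the catalog (standard Job/JobMedia/Media table names; the volume name, job names, and the "configured" set are placeholders for illustration):

```python
import sqlite3

# In-memory stand-in for the catalog; table/column names follow the
# standard Bacula schema, but every row here is a made-up placeholder.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Job (JobId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Media (MediaId INTEGER PRIMARY KEY, VolumeName TEXT);
CREATE TABLE JobMedia (JobId INTEGER, MediaId INTEGER);
INSERT INTO Job VALUES (1, 'srv1-backup'), (2, 'retired-job');
INSERT INTO Media VALUES (10, 'VOL0001');
INSERT INTO JobMedia VALUES (1, 10), (2, 10);
""")

# Distinct job names stored on the volume, to be checked by hand
# against the Job resources still present in bacula-dir.conf.
on_volume = {name for (name,) in con.execute("""
    SELECT DISTINCT Job.Name FROM Job
    JOIN JobMedia ON JobMedia.JobId = Job.JobId
    JOIN Media    ON Media.MediaId  = JobMedia.MediaId
    WHERE Media.VolumeName = 'VOL0001'
""")}
configured = {"srv1-backup"}      # job names taken from bacula-dir.conf
missing = on_volume - configured  # per this thread, Migrate would skip these
print(sorted(missing))
```

If `missing` is non-empty on the live catalog, those are exactly the jobs the thread warns about: restorable today, but (per Alan's report) silently dropped by a Migration job until the fix Kern describes lands.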






___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




