Re: [Bacula-users] bacula catalog inconsistency

2015-07-30 Thread Lukas Hejtmanek
On Tue, Jul 28, 2015 at 06:59:29PM -0300, Ana Emília M. Arruda wrote:
> I'm sorry, I was talking about JobId 1557 and not the 1584. I think the
> problem here is not the number of files, but the VolBytes=64,512. For
> Bacula, your volume has no backup data. Have you checked the information
> for this volume before the bscan -s -m operation? Did it appear in catalog?

+-------+----------------------+---------------------+------+-------+----------+---------------+-----------+
| JobId | Name                 | StartTime           | Type | Level | JobFiles | JobBytes      | JobStatus |
+-------+----------------------+---------------------+------+-------+----------+---------------+-----------+
| 1,557 | BackupClientBay2-ics | 2015-07-15 23:05:01 | B    | I     |      590 | 3,763,883,328 | T         |
+-------+----------------------+---------------------+------+-------+----------+---------------+-----------+

Well, the volume bytes are low because I accidentally truncated it after
purge, so I cannot provide further information besides what I already
provided. The problem arose *before* I purged & truncated that volume.

Maybe I can only postpone this problem until I notice it again, so that I
can provide more information?

-- 
Lukáš Hejtmánek

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Browsing through Bacula made backup files

2015-07-30 Thread Radosław Korzeniewski
Hello,

2015-07-27 12:59 GMT+02:00 Luc A :

> Dear reader,
>
> Is there a way to browse through Bacula-made backup files, like browsing
> through a RAR file with WinRAR?
>
>
Yes. Use bls for that. When you need to extract something, you can use
bextract.
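
For example (the config path, volume name and device name below are only
assumptions, not values from this thread), a quick look into a disk volume
could be done roughly like this:

  # list the files contained in a volume
  bls -c /opt/bacula/etc/bacula-sd.conf -V Vol0001 FileStorage

  # extract everything from that volume into /tmp/restore
  bextract -c /opt/bacula/etc/bacula-sd.conf -V Vol0001 FileStorage /tmp/restore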

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Browsing through Bacula made backup files

2015-07-30 Thread Josip Deanovic
On Monday 2015-07-27 12:59:23 Luc A wrote:
> Dear reader,
> 
> Is there a way to browse through Bacula-made backup files, like
> browsing through a RAR file with WinRAR?
> 
> Best regards,
> 
> Luc

Hi

Others already gave you the answer mentioning the bls and brestore tools.
I would like to add that in order to extract the files without the
bacula database, using the bextract tool, you will also have to create a
bootstrap file.

There is a relatively good example of creating a bootstrap (bsr) file on
this blog:
https://pipposan.wordpress.com/2010/06/09/bacula-tape-restore-without-database/

The example there works with tapes, but it is easy to adapt for file
storage (the storage device parameter would be replaced by the directory
holding the volume files).

Also, check whether your job spans more than one volume. If it does, you
will have to append all the related volumes to the bootstrap file.
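
As a rough illustration (the volume name, JobId, device name and paths
below are made up, not taken from this thread), a minimal bsr file and the
matching bextract call could look like:

  # minimal.bsr -- hypothetical values, adjust to your own volume/job
  Volume = Vol0001
  JobId = 1234

  # point bextract at the bootstrap file with -b
  bextract -b minimal.bsr -c /opt/bacula/etc/bacula-sd.conf FileStorage /tmp/restore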


-- 
Josip Deanovic

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula catalog inconsistency

2015-07-30 Thread Ana Emília M . Arruda
Hello Lukas,

I found something here, but I really don't think these are catalog
inconsistencies.
I misunderstood the 64,512 bytes: you're not dealing with tapes but with
disk volumes, so this is OK. You have data there. Also, you would not be
able to bls the files if they were not there.
Your "list volumes" output shows a 180-day volume retention, not 100 days
as you said. Maybe you need to update the existing volumes in this pool so
that the pool definition in your conf files is reflected in the catalog.
When you purge a volume, all the job and file information for the jobs on
it is also purged from the catalog. Your backup data will be preserved in
the volume until it is reused by Bacula for another backup. If the volume
is not reused and you do a bscan to retrieve the Media/Pool/Job/File
records into the catalog, Bacula will create a new JobId for the original
JobId found in the volume, if this was the first volume used by your
backup (supposing your backup job spanned several volumes):

bscan: bscan.c:1139 Created new JobId=161 record for original JobId=160
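
(For reference, a typical invocation of that kind is sketched below; the
config path, volume name and device name are placeholders, not values from
this thread:)

  # rebuild Media/Job/File catalog records from an existing volume
  # -s stores the records in the database, -m updates the media info
  bscan -s -m -v -c /opt/bacula/etc/bacula-sd.conf -V Vol0001 FileStorage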

Maybe it would be a good idea to compare the pool definitions in your conf
files with the definitions of the existing volumes in your pools.

Best regards,
Ana


On Thu, Jul 30, 2015 at 4:29 AM, Lukas Hejtmanek 
wrote:

> On Tue, Jul 28, 2015 at 06:59:29PM -0300, Ana Emília M. Arruda wrote:
> > I'm sorry, I was talking about JobId 1557 and not the 1584. I think the
> > problem here is not the number of files, but the VolBytes=64,512. For
> > Bacula, your volume has no backup data. Have you checked the information
> > for this volume before the bscan -s -m operation? Did it appear in
> catalog?
>
>
> +-------+----------------------+---------------------+------+-------+----------+---------------+-----------+
> | JobId | Name                 | StartTime           | Type | Level | JobFiles | JobBytes      | JobStatus |
> +-------+----------------------+---------------------+------+-------+----------+---------------+-----------+
> | 1,557 | BackupClientBay2-ics | 2015-07-15 23:05:01 | B    | I     |      590 | 3,763,883,328 | T         |
> +-------+----------------------+---------------------+------+-------+----------+---------------+-----------+
>
> Well, the volume bytes are low because I accidentally truncated it after
> purge, so I cannot provide further information besides what I already
> provided. The problem arose *before* I purged & truncated that volume.
>
> Maybe I can only postpone this problem until I notice it again, so that
> I can provide more information?
>
> --
> Lukáš Hejtmánek
>
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] backup job failing on Debian client

2015-07-30 Thread Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]
Wanderlei,

I enabled debug on the Debian (sparc) client and manually tried to run a full 
backup of this client. When it is just about to start spooling the bacula-fd 
crashes on the client. Here is the debug output:

***-fd: job.c:288-0 Executing JobId= command.
***-fd: job.c:1733-0 set sd auth key
***-fd: job.c:546-0 JobId=0 Auth=dummy
***-fd: fd_plugins.c:1033-0 plugin list is NULL
***-fd: job.c:272-0  Accurate= BaseJob=
***-fd: find.c:215-0 F /
***-fd: find_one.c:374-0 File : /
***-fd: htable.c:79-0 malloc buf=f5956020 size=262144 rem=262132
***-fd: htable.c:198-0 Allocated big buffer of 262144 bytes
***-fd: htable.c:140-0 Leave hash_index hash=0xd index=46
***-fd: htable.c:349-0 Insert: hash=0 index=13
***-fd: htable.c:352-0 Insert hp=f595602c index=46 item=f595602c offset=0
Bacula interrupted by signal 10: BUS error
Kaboom! bacula-fd, ***-fd got signal 10 - BUS error. Attempting traceback.
Kaboom! exepath=/usr/sbin/
***-fd: signal.c:197-0 Working=/opt/bacula/var
***-fd: signal.c:198-0 btpath=/usr/sbin/btraceback
***-fd: signal.c:199-0 exepath=/usr/sbin/bacula-fd
***-fd: signal.c:228-0 Doing waitpid
Calling: /usr/sbin/btraceback /usr/sbin/bacula-fd 16733 /opt/bacula/var

I see the “BUS error” in the above output. I am not sure if this is the 
problem?  Thank you.

Uthra


From: Wanderlei Huttel [mailto:wanderleihut...@gmail.com]
Sent: Tuesday, July 28, 2015 4:09 PM
To: Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] backup job failing on Debian client

Hi Uthra

Another thing you could try is to enable debug mode and do a backup:
http://www.bacula.org/5.0.x-manuals/en/problems/problems/What_Do_When_Bacula.html

Getting Debug Output from Bacula

Each of the daemons normally has debug compiled into the program, but disabled. 
There are two ways to enable the debug output. One is to add the -d nnn option 
on the command line when starting the daemon. The nnn is the debug level, and 
generally anything between 50 and 200 is reasonable. The higher the number, the 
more output is produced. The output is written to standard output.
The second way of getting debug output is to dynamically turn it on using the 
Console using the setdebug command. The full syntax of the command is:

 setdebug level=nnn client=client-name storage=storage-name dir

If none of the options are given, the command will prompt you. You can 
selectively turn on/off debugging in any or all the daemons (i.e. it is not 
necessary to specify all the components of the above command).
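
For example (the client name and debug level below are only illustrative):

  # start the client daemon in the foreground with debug level 100
  bacula-fd -f -d 100

  # or, from bconsole, turn debugging on dynamically for one client
  setdebug level=100 client=debian-fd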


Best Regards
Wanderlei

2015-07-28 16:07 GMT-03:00 Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]
<uthra.r@nasa.gov>:
Iptables? Is there any way you are using a 32bits client on a 64 environment?

- My bacula server is RHEL 6 (64-bit) and the client is Debian O.S. (64-bit 
sparc)

I opened port 9102 on the client.

Thank you.
Uthra


From: Heitor Faria [mailto:hei...@bacula.com.br]
Sent: Tuesday, July 28, 2015 11:50 AM
To: Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]
Cc: Ana Emília M. Arruda; 
bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] backup job failing on Debian client


Hello Uthra: do you have selinux enabled on the storage daemon side (RHEL)? 
Does your connection go through a firewall or maybe a nasty router?
Depending on your answers you may try to use the "Heartbeat Interval" option in 
the affected daemons.

Hello Heitor,

SELINUX is not enabled on the bacula server. All the other client backups 
configured on this bacula server are working fine. I tried adding the 
"Heartbeat Interval 60" setting in the client-dir and client-fd files. The 
backup of the Debian client is still failing:

***Fatal error: Network error with FD during Backup: ERR=No data available 
28-Jul 11:28 lindy-sd JobId 18819: Fatal error: append.c:160 Error reading data 
header from FD. ERR=No data available
Iptables? Is there any way you are using a 32bits client on a 64 environment?

Regards,
===
Heitor Medrado de Faria - LPIC-III | ITIL-F |  Bacula Systems Certified 
Administrator II
Do you need Bacula training? 
https://www.udemy.com/bacula-backup-software/?couponCode=bacula-list
+55 61  8268-4220
Site: http://bacula.us FB: heitor.faria
===
Thank you.
Uthra

From: Heitor Faria [mailto:hei...@bacula.com.br]
Sent: Tuesday, July 28, 2015 10:03 AM
To: Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]
Cc: Ana Emília M. Arruda; 
bacula-users@lists.sourceforge.net

Subject: Re: [Bacula-users] backup job failing on Debian client
Hi Ana,

I had just cut and pasted the relevant part of the log. The fileset is -> “File 
= /” (it has t

Re: [Bacula-users] backup job failing on Debian client

2015-07-30 Thread Josh Fisher


On 7/30/2015 9:37 AM, Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC] wrote:


Wanderlei,

I enabled debug on the Debian (sparc) client and manually tried to run
a full backup of this client. When it is just about to start spooling
the bacula-fd crashes on the client. Here is the debug output:

***-fd: job.c:288-0 Executing JobId= command.
***-fd: job.c:1733-0 set sd auth key
***-fd: job.c:546-0 JobId=0 Auth=dummy
***-fd: fd_plugins.c:1033-0 plugin list is NULL
***-fd: job.c:272-0 status command.
***-fd: runscript.c:108-0 runscript: running all RUNSCRIPT object (ClientAfterJob) JobStatus=C
***-fd: pythonlib.c:225-0 No startup module.
***-fd: job.c:401-0 Calling term_find_files
***-fd: job.c:406-0 Done with term_find_files
***-fd: runscript.c:286-0 runscript: freeing all RUNSCRIPTS object
***-fd: job.c:408-0 Done with free_jcr
***-fd: mem_pool.c:375-0 garbage collect memory pool
***-fd: job.c:2377-0 3000 OK data
***-fd: pythonlib.c:225-0 No startup module.
***-fd: job.c:1936-0 begin blast ff=5ac60
***-fd: backup.c:89-0 bfiled: opened data connection 6 to stored
***-fd: find.c:94-0 Enter set_find_options()
***-fd: find.c:97-0 Leave set_find_options()
***-fd: find.c:211-0 Verify= Accurate= BaseJob=
***-fd: find.c:215-0 F /
***-fd: find_one.c:374-0 File : /
***-fd: htable.c:79-0 malloc buf=f5956020 size=262144 rem=262132
***-fd: htable.c:198-0 Allocated big buffer of 262144 bytes
***-fd: htable.c:140-0 Leave hash_index hash=0xd index=46
***-fd: htable.c:349-0 Insert: hash=0 index=13
***-fd: htable.c:352-0 Insert hp=f595602c index=46 item=f595602c offset=0
Bacula interrupted by signal 10: BUS error
Kaboom! bacula-fd, ***-fd got signal 10 - BUS error. Attempting traceback.
Kaboom! exepath=/usr/sbin/
***-fd: signal.c:197-0 Working=/opt/bacula/var
***-fd: signal.c:198-0 btpath=/usr/sbin/btraceback
***-fd: signal.c:199-0 exepath=/usr/sbin/bacula-fd
***-fd: signal.c:228-0 Doing waitpid
Calling: /usr/sbin/btraceback /usr/sbin/bacula-fd 16733 /opt/bacula/var

I see the “BUS error” in the above output. I am not sure if this is
the problem?  Thank you.

A SIGBUS signal basically means that the code is asking the CPU to do 
something that is impossible, such as attempting to access a 64-bit 
integer value at a virtual memory address that is not aligned properly. 
Are you running a 32-bit bacula-fd?
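
As a generic illustration of that failure mode (not Bacula code), a
misaligned load like the one below can raise SIGBUS on strict-alignment
CPUs such as SPARC, while x86 tolerates it:

  /* align_demo.c -- minimal sketch of a misaligned 64-bit load */
  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      char buf[16] = {0};
      /* point into the buffer at an odd offset, then read it as a 64-bit int */
      uint64_t *p = (uint64_t *)(buf + 1);       /* not 8-byte aligned */
      printf("%llu\n", (unsigned long long)*p);  /* can SIGBUS on SPARC */
      return 0;
  }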



--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] backup job failing on Debian client

2015-07-30 Thread Kern Sibbald

  
  
Sometimes debug output creates bus errors on architectures such as Sparc,
for two reasons:

1. They are debug statements, so they are not always carefully written nor
checked.

2. We do not usually run on Sparc machines.

If you are getting a bus error, we will need a traceback; then we can fix
the problem.

Best regards,
Kern

On 30.07.2015 15:37, Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC] wrote:
> Wanderlei,
>
> I enabled debug on the Debian (sparc) client and manually tried to run a
> full backup of this client. When it is just about to start spooling the
> bacula-fd crashes on the client. Here is the debug output:
>
> [...]

Re: [Bacula-users] Browsing through Bacula made backup files

2015-07-30 Thread Radosław Korzeniewski
Hello,

2015-07-30 11:54 GMT+02:00 Josip Deanovic :

> I would like to add that in order to extract the files without the
> bacula database, using the bextract tool, you will also have to create a
> bootstrap file.
>

Well. It is not required. It is possible to use bextract without a bsr
file.

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Copy tape job for offsite - best practices

2015-07-30 Thread Bill Arlofski
On 07/29/2015 05:16 PM, RAT wrote:
> I'm not (easily) finding a way to copy weekend full backups to a 2nd tape for
> offsite.  I've seen some pretty elaborate SQL programs.  Isn't there an easier
> way?


Hi Robert,

I can't officially call this a "best practices" response, but one thing you
might want to try to avoid dealing with complex SQL queries for copy jobs is 
this:

- Set up a pool where your weekend jobs are written to.

- In that weekend pool make sure to set the "NextPool =" directive to point to
a pool that your offsite tape drive can use.

- Set up a copy job with the "Pool =" directive pointing to that weekend pool
("Pool = " directive in a copy job refers to the originating pool).

- Set the copy job's SelectionType = PoolUncopiedJobs

- Schedule the copy job to run after all weekend jobs are expected to be done
and/or set its priority lower so that even if it is scheduled for the same
time as the rest, it will be queued and will not start until the backup jobs
are finished.


Now there are no "unsightly SQL queries" in your copy job's configuration, and
your weekend jobs are easily identified by the copy job based on the fact that
they are in one pool.

Once a job has been copied, it will not be copied again if you use the
"PoolUncopiedJobs" SelectionType, so this is a nice shortcut.

Now you might need this to be a little more complex if you have (or require)
multiple weekend pools. For example, you may have Full, Inc, and Diff weekend
pools, or weekend pools based on client or on type of job (web servers,
windows servers, db servers, etc.), in which case you just need to build on
the basic example and create those multiple weekend pools, plus multiple
Copy jobs to address each pool.
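
A minimal configuration sketch of the recipe above might look like this
(all resource, schedule and storage names are placeholders, and the usual
mandatory Job directives are omitted for brevity):

  # bacula-dir.conf fragment -- illustrative sketch only
  Pool {
    Name = WeekendPool           # pool the weekend full jobs write to
    Pool Type = Backup
    Storage = LibraryTape        # hypothetical local tape storage
    Next Pool = OffsitePool      # pool the copies will be written to
  }

  Pool {
    Name = OffsitePool
    Pool Type = Backup
    Storage = OffsiteTape        # hypothetical 2nd/offsite tape drive
  }

  Job {
    Name = "CopyWeekendOffsite"
    Type = Copy
    Selection Type = PoolUncopiedJobs
    Pool = WeekendPool           # "Pool =" in a Copy job is the originating pool
    Priority = 15                # runs after the default Priority = 10 backup jobs
    # Client, FileSet, Schedule, Messages, etc. omitted for brevity
  }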


Hope this helps!

Bill




-- 
Bill Arlofski
http://www.revpol.com/bacula
-- Not responsible for anything below this line --

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Browsing through Bacula made backup files

2015-07-30 Thread Josip Deanovic
On Thursday 2015-07-30 21:51:11 Radosław Korzeniewski wrote:
> Hello,
> 
> 2015-07-30 11:54 GMT+02:00 Josip Deanovic :
> > I would like to add that in order to extract the files without the
> > bacula database, using the bextract tool, you will also have to create
> > a bootstrap file.
> 
> Well. It is not required. It is possible to use bextract without a bsr
> file.

It is possible, but without a bootstrap file you wouldn't be able to
control which job, or set of jobs, gets extracted.
If you have multiple backup jobs from multiple systems contained inside
the same backup volume (which is a common scenario), restoring without a
bsr file would make a mess in the specified output directory.

I might be wrong because I haven't tried it, but it seems to me that this
is exactly what would happen (unless you are using one job per volume,
which is possible but a rare practice).

-- 
Josip Deanovic

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Copy tape job for offsite - best practices

2015-07-30 Thread Josip Deanovic
On Thursday 2015-07-30 16:11:55 Bill Arlofski wrote:
> On 07/29/2015 05:16 PM, RAT wrote:
> > I'm not (easily) finding a way to copy weekend full backups to a 2nd
> > tape for offsite.  I've seen some pretty elaborate SQL programs. 
> > Isn't there an easier way?
> 
> Hi Robert,
> 
> I can't officially call this a "best practices" response, but one thing
> you might want to try to avoid dealing with complex SQL queries for
> copy jobs is this:
> 
> - Set up a pool where your weekend jobs are written to.
> 
> - In that weekend pool make sure to set the "NextPool =" directive to
> point to a pool that your offsite tape drive can use.
> 
> - Set up a copy job with the "Pool =" directive pointing to that weekend
> pool ("Pool = " directive in a copy job refers to the originating
> pool).
> 
> - Set the copy job's SelectionType = PoolUncopiedJobs
> 
> - Schedule the copy job to run after all weekend jobs are expected to be
> done and/or set its priority lower so that even if it is scheduled for
> the same time as the rest, it will be queued and will not start until
> the backup jobs are finished.
> 
> 
> Now there are no "unsightly SQL queries" in your copy job's
> configuration, and your weekend jobs are easily identified by the copy
> job based on the fact that they are in one pool.
> 
> Once a job has been copied, it will not be copied again if you use the
> "PoolUncopiedJobs" SelectionType, so this is a nice shortcut.
> 
> Now you might need this to be a little more complex if you have (or
> require) multiple weekend pools. For example, you may have Full, Inc,
> and Diff weekend pools, or weekend pools based on client, or type of
> job (web servers, windows servers, db servers etc)  In which case you
> just need to build on the basic example and create those multiple
> weekend pools, and multiple Copy jobs to address each pool.


I played with the "SelectionType = PoolUncopiedJobs" option some time ago
and found it problematic, because the first time you use it, it will copy
every job existing in the database which has not yet been copied.

Since I wanted to start by copying only the latest full and the related
differential and incremental jobs, I opted for writing an SQL query. I
found an SQL query somewhere in the Bacula source and used it as a
starting point for an improved query that matches my requirements.

I came up with an SQL query which matches only the latest successful full
and all the related successful differential and incremental jobs which
haven't already been copied. I am using it to copy jobs to a remote
location.

If someone needs it I can post the SQL query here.
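
Purely as an illustration of the idea (this is not the query mentioned
above; the job name is hypothetical and copied jobs are assumed to record
their source job in PriorJobId), such a selection against the standard
catalog Job table might look roughly like:

  -- latest successful Full of one job, plus the later Diff/Inc jobs for it,
  -- excluding anything that has already been copied
  SELECT j.JobId
    FROM Job j
   WHERE j.Name = 'WeekendFullJob'
     AND j.Type = 'B'
     AND j.JobStatus IN ('T', 'W')
     AND j.StartTime >= (SELECT MAX(f.StartTime)
                           FROM Job f
                          WHERE f.Name = j.Name
                            AND f.Type = 'B'
                            AND f.Level = 'F'
                            AND f.JobStatus IN ('T', 'W'))
     AND j.JobId NOT IN (SELECT PriorJobId FROM Job
                          WHERE Type IN ('B', 'C') AND PriorJobId <> 0)
   ORDER BY j.StartTime;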

-- 
Josip Deanovic

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users