Re: [Bacula-users] Pool file/job retention not updating

2010-05-28 Thread Machiel van Veen
On Thursday 27 May 2010 08:36:05 pm Martin Simmons wrote:
  On Thu, 27 May 2010 14:40:13 +0200, Machiel van Veen said:
 
  But when I do show pool=Default in bconsole I get:
 
  JobRetention=0 secs FileRetention=15 years 4 months 1 day 11 hours 46
  mins 16 secs
 
 It is a bug in the show pool command in the current releases.  I don't know
  if it will be fixed in the next release because the latest source is no
  longer available.
 
 __Martin

Thanks for your reply; however, that means that having the file and job 
retention in the pool directive does not function as I 
hoped/understood.

I have a setup running two jobs for one client. Both jobs have their own pool 
with their own retention times. But when the job with the shortest retention 
time runs it also prunes jobs and file records from the other job with the 
longer retention times set.

Is it correct that I need to create two clients, each with its own 
configuration, in order to have two jobs with different retention times? Is 
pruning done based on the client, not the job and/or pool?

Thank you, best regards,

Machiel.

--

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] backup failed, network problems, vchanger or bacula?

2010-05-28 Thread C . Keschnat
Today our backups failed. It seemed to run fine until some point, then it 
stopped with a changer error. There was also this error:
ptrace: Operation not permitted.
/home/bacula/working/19031: No such file or directory.
$1 = 0
/home/bacula/etc/btraceback.gdb:2: Error in sourced command file:
No symbol "exename" in current context.

But I don't know if that is related.

All the Joboutputs have these lines in them:
28-Mai 02:53 bacula-sd JobId 9147: 3301 Issuing autochanger loaded? drive 
0 command.
28-Mai 02:53 bacula-sd JobId 9147: 3991 Bad autochanger loaded? drive 0 
command: ERR=Child exited with code 1.
Results=Could not write loaded0 file on /mnt/usbchanger1/mag

The first job wrote some data and also had these lines in the output:
28-Mai 02:53 iq-fd JobId 9147: Fatal error: backup.c:944 Network send 
error to SD. ERR=Connection reset by peer

Seems like there was a problem with the network. The external HDD is 
mounted through iSCSI on the server. When I came to work this morning 
though I could touch and delete files without problems so the iSCSI 
connection was still there. Could small network problems cause 
vchanger/bacula to stop working? I'd like to find out if the problem is 
bacula or vchanger related. I know that probably no one is using vchanger 
with iSCSI but I hope someone might have an idea on how to solve this (it 
is not the first time we had this kind of problem).
This is bacula 5.0.1 (clients 3.0.0) and vchanger 0.8.2.

Here is the full output of the first job that ran:

28-Mai 02:05 bacula-dir JobId 9147: Start Backup JobId 9147, 
Job=iq.2010-05-28_02.05.00_47
28-Mai 02:05 bacula-dir JobId 9147: Using Device usb-changer-1-drive-0
28-Mai 02:05 iq-fd JobId 9147: shell command: run ClientBeforeJob 
/etc/init.d/rc_domino stop
28-Mai 02:05 iq-fd JobId 9147: ClientBeforeJob: Switching to notes
28-Mai 02:05 iq-fd JobId 9147: ClientBeforeJob: Stopping Domino for xLinux 
(notes)
28-Mai 02:05 iq-fd JobId 9147: ClientBeforeJob:  ... waiting for shutdown 
to complete
28-Mai 02:05 iq-fd JobId 9147: ClientBeforeJob:  ... waiting 10 seconds
28-Mai 02:05 iq-fd JobId 9147: ClientBeforeJob:  ... waiting 20 seconds
28-Mai 02:05 iq-fd JobId 9147: ClientBeforeJob:  ... waiting 30 seconds
28-Mai 02:05 iq-fd JobId 9147: ClientBeforeJob:  ... waiting 40 seconds
28-Mai 02:05 iq-fd JobId 9147: ClientBeforeJob:  ... waiting 50 seconds
28-Mai 02:05 iq-fd JobId 9147: ClientBeforeJob: ..done
28-Mai 02:05 iq-fd JobId 9147: ClientBeforeJob: Domino for xLinux (notes) 
shutdown completed
28-Mai 02:05 bacula-sd JobId 9147: 3307 Issuing autochanger unload slot 
15, drive 0 command.
28-Mai 02:05 bacula-sd JobId 9147: 3304 Issuing autochanger load slot 16, 
drive 0 command.
28-Mai 02:05 bacula-sd JobId 9147: 3305 Autochanger load slot 16, drive 
0, status is OK.
28-Mai 02:05 bacula-sd JobId 9147: Volume usbchanger1_0005_0016 
previously written, moving to end of data.
28-Mai 02:05 bacula-sd JobId 9147: Ready to append to end of Volume 
usbchanger1_0005_0016 size=2781850256
28-Mai 02:12 bacula-sd JobId 9147: User defined maximum volume capacity 
4,831,838,208 exceeded on device usb-changer-1-drive-0 
(/home/bacula/working/usbchanger1/0/drive0).
28-Mai 02:12 bacula-sd JobId 9147: End of medium on Volume 
usbchanger1_0005_0016 Bytes=4,831,783,484 Blocks=74,900 at 28-Mai-2010 
02:12.
28-Mai 02:12 bacula-sd JobId 9147: 3307 Issuing autochanger unload slot 
16, drive 0 command.
28-Mai 02:12 bacula-dir JobId 9147: There are no more Jobs associated with 
Volume usbchanger1_0005_0032. Marking it purged.
28-Mai 02:12 bacula-dir JobId 9147: All records pruned from Volume 
usbchanger1_0005_0032; marking it Purged
28-Mai 02:12 bacula-dir JobId 9147: Recycled volume 
usbchanger1_0005_0032
28-Mai 02:12 bacula-sd JobId 9147: 3301 Issuing autochanger loaded? drive 
0 command.
28-Mai 02:12 bacula-sd JobId 9147: 3302 Autochanger loaded? drive 0, 
result: nothing loaded.
28-Mai 02:12 bacula-sd JobId 9147: 3304 Issuing autochanger load slot 32, 
drive 0 command.
28-Mai 02:12 bacula-sd JobId 9147: 3305 Autochanger load slot 32, drive 
0, status is OK.
28-Mai 02:12 bacula-sd JobId 9147: Recycled volume usbchanger1_0005_0032 
on device usb-changer-1-drive-0 
(/home/bacula/working/usbchanger1/0/drive0), all previous data lost.
28-Mai 02:12 bacula-sd JobId 9147: New volume usbchanger1_0005_0032 
mounted on device usb-changer-1-drive-0 
(/home/bacula/working/usbchanger1/0/drive0) at 28-Mai-2010 02:12.
28-Mai 02:27 bacula-sd JobId 9147: User defined maximum volume capacity 
4,831,838,208 exceeded on device usb-changer-1-drive-0 
(/home/bacula/working/usbchanger1/0/drive0).
28-Mai 02:27 bacula-sd JobId 9147: End of medium on Volume 
usbchanger1_0005_0032 Bytes=4,831,819,893 Blocks=74,898 at 28-Mai-2010 
02:27.
28-Mai 02:27 bacula-sd JobId 9147: 3307 Issuing autochanger unload slot 
32, drive 0 command.
28-Mai 02:27 bacula-dir JobId 9147: There are no more Jobs associated with 
Volume usbchanger1_0005_0033. Marking it purged.
28-Mai 02:27 bacula-dir 

Re: [Bacula-users] resurrecting an FC11 install - cannot connect to postgresql

2010-05-28 Thread Gary Stainburn
On Thursday 27 May 2010 19:54:16 Martin Simmons wrote:
  On Thu, 27 May 2010 14:56:19 +0100, Gary Stainburn said:
 
  Catalog {
    Name = MyCatalog
    DB Address = '127.0.0.1'; dbname = bacula; user = bacula; password = *
  }

 This should be DB Address = "127.0.0.1", i.e. double-quotes instead of
 single-quotes.

 __Martin

Cheers Martin.  That sorted it


-- 
Gary Stainburn
 
This email does not contain private or confidential material as it
may be snooped on by interested government parties for unknown
and undisclosed purposes - Regulation of Investigatory Powers Act, 2000 



[Bacula-users] How to find all file .avi from full tape

2010-05-28 Thread Simone Martina
Hi all,
some of my colleagues tend to save non-work files (like large avi 
files) in a shared directory, so my Bacula backup job takes a lot of time 
saving this useless rubbish... I would like to find the full path of 
every file whose name contains avi or AVI. Does bconsole have a command 
for this type of query, or should I query MySQL directly?

Thanks for your suggestions,

Simone



Re: [Bacula-users] How to find all file .avi from full tape

2010-05-28 Thread Graham Keeling
On Fri, May 28, 2010 at 10:47:00AM +0200, Simone Martina wrote:
 Hi all,
 some of my colleagues tend to save non-work files (like large avi 
 files) in a shared directory, so my Bacula backup job takes a lot of time 
 saving this useless rubbish... I would like to find the full path of 
 every file whose name contains avi or AVI. Does bconsole have a command 
 for this type of query, or should I query MySQL directly?

I had to do something similar yesterday. I don't know how to do it with
bconsole.

I ended up running something like this...

SELECT DISTINCT p.Path, fn.Name FROM Path p, File f, Filename fn
WHERE p.PathId=f.PathId AND fn.FilenameId=f.FilenameId AND
(p.Path LIKE '%avi%' OR fn.Name LIKE '%avi%');

...or maybe this...

SELECT DISTINCT p.Path, fn.Name FROM Path p, File f, Filename fn
WHERE p.PathId=f.PathId AND fn.FilenameId=f.FilenameId AND
fn.Name LIKE '%.avi';

You have to add more table joins to get the job and client out.
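
If case variants matter (avi vs AVI), a variant of the same query can
normalize the name first; this is only a sketch, assuming the MySQL LOWER()
function and the same Bacula catalog schema as above (note that MySQL's
default collation may already make LIKE case-insensitive, so this mainly
matters on case-sensitive setups such as PostgreSQL catalogs):

```sql
-- Case-insensitive match on the file extension
SELECT DISTINCT p.Path, fn.Name
FROM Path p, File f, Filename fn
WHERE p.PathId = f.PathId
  AND fn.FilenameId = f.FilenameId
  AND LOWER(fn.Name) LIKE '%.avi';
```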




Re: [Bacula-users] How to find all file .avi from full tape

2010-05-28 Thread Graham Keeling
On Fri, May 28, 2010 at 10:12:51AM +0100, Graham Keeling wrote:
 On Fri, May 28, 2010 at 10:47:00AM +0200, Simone Martina wrote:
  Hi all,
  some of my colleagues tend to save non-work files (like large avi 
  files) in a shared directory, so my Bacula backup job takes a lot of time 
  saving this useless rubbish... I would like to find the full path of 
  every file whose name contains avi or AVI. Does bconsole have a command 
  for this type of query, or should I query MySQL directly?
 
 I had to do something similar yesterday. I don't know how to do it with
 bconsole.
 
 I ended up running something like this...
 
 SELECT DISTINCT p.Path, fn.Name FROM Path p, File f, Filename fn
 WHERE p.PathId=f.PathId AND fn.FilenameId=f.FilenameId AND
 (p.Path LIKE '%avi%' OR fn.Name LIKE '%avi%');
 
 ...or maybe this...
 
 SELECT DISTINCT p.Path, fn.Name FROM Path p, File f, Filename fn
 WHERE p.PathId=f.PathId AND fn.FilenameId=f.FilenameId AND
 fn.Name LIKE '%.avi';
 
 You have to add more table joins to get the job and client out.

...actually, the JobId is in the File table, so that is easy:

SELECT DISTINCT f.JobId, p.Path, fn.Name FROM Path p, File f, Filename fn
 WHERE p.PathId=f.PathId AND fn.FilenameId=f.FilenameId AND
 fn.Name LIKE '%.avi';




Re: [Bacula-users] How to find all file .avi from full tape

2010-05-28 Thread Simone Martina
Thanks for your suggestions. I propose a query made with the help of a 
friend of mine:

SELECT CONCAT(Path.Path, Filename.Name) FROM Filename
INNER JOIN File ON Filename.FilenameId = File.FilenameId
INNER JOIN Path ON File.PathId = Path.PathId
WHERE (Filename.Name LIKE '%avi') OR (Filename.Name LIKE '%AVI')
OR (Filename.Name LIKE '%mpeg') OR (Filename.Name LIKE '%MPEG')
OR (Filename.Name LIKE '%mpg') OR (Filename.Name LIKE '%MPG');

Now, I bring up the reasoner (http://wggw.bellobello.it/macek.jpg) and 
will go to talk to some of my colleagues :-P.
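
For completeness: once the offending files are identified, they can also be
kept out of future jobs with a FileSet exclusion. This is only a sketch, with
hypothetical paths and resource names; see the FileSet resource in the Bacula
manual for the exact directives:

```
FileSet {
  Name = "SharedDirSet"
  Include {
    Options {
      # drop video files from the backup; add case variants as needed
      WildFile = "*.avi"
      WildFile = "*.mpg"
      WildFile = "*.mpeg"
      Exclude = yes
    }
    File = /srv/shared
  }
}
```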

Simone



Re: [Bacula-users] 5.0.2 src rpm dependency bug?

2010-05-28 Thread Martin Simmons
 On Thu, 27 May 2010 19:14:27 +0100, Alan Brown said:
 
 Martin Simmons wrote:
 
  Why is there a postgres dependency in the client package?
  
  Is the dependency definitely coming from the client package and not
  bacula-libs?
 
 It could be, but the client package depends on the libs package (why?)

All of the bacula daemons and tools use code that is in the libs.  In the
default build, the libs are .so files so it makes sense to put them into a
separate package.


  There is currently an interesting misfeature that makes bacula-libs depend
  on the database because it contains libbacsql.
 
 IMO The client package needs to be as slimmed down as possible. I don't 
 even think that bconsole should be in there.

Yes, maybe bconsole should be in its own package.

__Martin



Re: [Bacula-users] Mail error: bsmtp cannot find libbac-5.0.1.so after upgrade from 3.0.3 to 5.0.2

2010-05-28 Thread Beck J Mr
I had the same problem here. Be aware also that if you upgrade
your database software and then reconfigure, make and install Bacula (as is
strongly recommended in the documentation), the problem reoccurs.

Thanks
James


-Original Message-
From: Alex Chekholko [mailto:ch...@genomics.upenn.edu] 
Sent: 27 May 2010 15:00
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Mail error: bsmtp cannot find
libbac-5.0.1.so after upgrade from 3.0.3 to 5.0.2

On Wed, 12 May 2010 13:01:51 +0200
Foo bfo...@yahoo.co.uk wrote:

 Just a FYI if someone else runs into this:
 
 after upgrading from 2.4.3 to 3.0.3 and then to 5.0.2 I'm getting
 (ironically) the following mail from logcheck:
 
 May 12 11:45:10 HOSTNAME bacula-dir: 12-May 11:45  Message delivery ERROR:
 Mail prog: /sbin/bsmtp: error while loading shared libraries:
 libbac-5.0.1.so: cannot open shared object file: No such file or
 directory
 
 May 12 11:45:10 HOSTNAME bacula-dir: 12-May 11:45  Message delivery ERROR:
 Mail program terminated in error. CMD=/sbin/bsmtp -h mailgw -f 'Bacula
 bacula-...@domain' -s "Bacula: Backup OK of OTHERHOST-fd Incremental"
 f...@domain ERR=Child exited with code 127
 
 But it does exist:
 -rwxr-x--- 1 root root 1010330 2010-05-12 11:11 
 /usr/lib/libbac-5.0.1.so
 
 
 The fix was to do a chown root:bacula on /usr/lib/libbac*
 
 The bacula group has the bacula-dir and -sd users in it here, as well 
 as some people who need to run bconsole (so executables like 
 /sbin/bconsole are all also root:bacula)
 
 Not sure if this is handled correctly by packaging, I used source for 
 all versions so far.
 

I just want to confirm that I had the same problem (or very similar).
I built 5.0.2 from the src rpm on CentOS 5.4

11-May 17:08 bac-dir JobId 10262: shell command: run
BeforeJob /opt/bacula/scripts/make_catalog_backup.pl MyCatalog
11-May 17:08 bac-dir JobId 10262: BeforeJob: /opt/bacula/bin/dbcheck:
error while loading shared libraries: libbacsql-5.0.1.so: cannot open
shared object file: No such file or directory

The fix was as Foo suggested (my libs are at a different path):
# chown root:bacula /opt/bacula/lib64/libbac*

Regards,
-- 
Alex Chekholko   ch...@genomics.upenn.edu  




Re: [Bacula-users] backup failed, network problems, vchanger or bacula?

2010-05-28 Thread John Drescher
2010/5/28  c.kesch...@internet-mit-iq.de:
 Today our backups failed. It seemed to run fine until some point, then it
 stopped with a changer error. There was also this error:
 ptrace: Operation not permitted.
 /home/bacula/working/19031: No such file or directory.
 $1 = 0
 /home/bacula/etc/btraceback.gdb:2: Error in sourced command file:
 No symbol exename in current context.

 But I don't know if that is related.

 All the Joboutputs have these lines in them:
 28-Mai 02:53 bacula-sd JobId 9147: 3301 Issuing autochanger loaded? drive
 0 command.
 28-Mai 02:53 bacula-sd JobId 9147: 3991 Bad autochanger loaded? drive 0
 command: ERR=Child exited with code 1.
 Results=Could not write loaded0 file on /mnt/usbchanger1/mag

 The first job wrote some data and also had this lines in the output:
 28-Mai 02:53 iq-fd JobId 9147: Fatal error: backup.c:944 Network send error
 to SD. ERR=Connection reset by peer

 Seems like there was a problem with the network. The external HDD is mounted
 through iSCSI on the server. When I came to work this morning though I could
 touch and delete files without problems so the iSCSI connection was still
 there. Could small network problems cause vchanger/bacula to stop working?
 I'd like to find out if the problem is bacula or vchanger related. I know
 that probably noone is using vchanger with iSCSI but I hope someone might
 have an idea on how to solve this (it is not the first time we had this kind
 of problem).
 This is bacula 5.0.1 (clients 3.0.0) and vchanger 0.8.2.


Re: [Bacula-users] [Bacula-devel] bconsole scripting

2010-05-28 Thread Olaf Zevenboom
Kern Sibbald wrote:
 On Thursday 27 May 2010 17:23:46 Morty Abzug wrote:
   
 On Tue, May 25, 2010 at 05:01:52PM +0200, Kern Sibbald wrote:
 
 I don't think there is any need for either a -e option nor a -b option. 
 Both can easily be done via the shell.
   
  -e can indeed easily be done from the shell.  But the caller isn't
 always a shell environment.  It's usually possible to work around this,
 of course, with something like sh -c "echo mount | bconsole"; -e would
 just make this easier.

  -b cannot easily be done by the shell.  The intent is to remove
 formatting done for human convenience.
 

 I have been doing *lots* of scripting of bconsole for 8 years now with no 
 problems, so I really don't understand the problem.

 What do you mean by "remove formatting done for human convenience"?
   

Not sure what *he* means, but I can guess: mysql has various options to 
produce unformatted output, or a special way of formatting output, when 
used from the command line or a script. To relate this to Bacula: "list 
media" scripted through bconsole (echo list media | bconsole) produces a 
list of media formatted as an ASCII table, with columns, fields etc. 
marked up by +, - and | characters.
Comparing this to mysql again: mysql allows the removal of 
column/field names and other layout; bconsole does not (see man mysql).
This results in the need to strip those from within the script. 
Fine if you are using perl, python, php etc., but a bit more troublesome 
if you are scripting in Bash (cut, grep, awk, sed can help of course). 
So the point being: bconsole's output must be stripped on every occasion, 
which leads to more coding in scripts. Depending on the scripting 
language this is hard or easy, but needed nevertheless.
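
To make the point concrete, here is a small awk filter of the kind described
above. The table below only simulates bconsole's "list media" output so the
snippet is self-contained; in real use it would come from the bconsole pipe,
and the exact table layout can differ between Bacula versions:

```shell
#!/bin/sh
# In real use the table would come from: echo "list media" | bconsole
# Here we simulate bconsole's ASCII-table output so the filter can be run as-is.
table='+---------+-----------------------+
| MediaId | VolumeName            |
+---------+-----------------------+
|       1 | usbchanger1_0005_0016 |
|       2 | usbchanger1_0005_0032 |
+---------+-----------------------+'

# Strip the +---+ border lines and the header row, then print the
# cell contents as plain whitespace-separated fields.
printf '%s\n' "$table" |
  awk -F'|' '/^\+/ {next} NR > 2 {gsub(/ /, "", $2); gsub(/ /, "", $3); print $2, $3}'
```

The same idea extends to more columns; every script that consumes bconsole
output ends up carrying a filter like this.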

Olaf




[Bacula-users] Antwort: Re: backup failed, network problems, vchanger or bacula?

2010-05-28 Thread C . Keschnat
John Drescher dresche...@gmail.com wrote on 28.05.2010 13:53:46:

 2010/5/28  c.kesch...@internet-mit-iq.de:
  Today our backups failed. It seemed to run fine until some point, then it
  stopped with a changer error. There was also this error:
  ptrace: Operation not permitted.
  /home/bacula/working/19031: No such file or directory.
  $1 = 0
  /home/bacula/etc/btraceback.gdb:2: Error in sourced command file:
  No symbol "exename" in current context.
 
  But I don't know if that is related.
 
  All the Joboutputs have these lines in them:
  28-Mai 02:53 bacula-sd JobId 9147: 3301 Issuing autochanger loaded? drive
  0 command.
  28-Mai 02:53 bacula-sd JobId 9147: 3991 Bad autochanger loaded? drive 0
  command: ERR=Child exited with code 1.
  Results=Could not write loaded0 file on /mnt/usbchanger1/mag
 
  The first job wrote some data and also had these lines in the output:
  28-Mai 02:53 iq-fd JobId 9147: Fatal error: backup.c:944 Network send
  error to SD. ERR=Connection reset by peer
 
  Seems like there was a problem with the network. The external HDD is
  mounted through iSCSI on the server. When I came to work this morning
  though I could touch and delete files without problems, so the iSCSI
  connection was still there. Could small network problems cause
  vchanger/bacula to stop working? I'd like to find out if the problem is
  bacula or vchanger related. I know that probably no one is using vchanger
  with iSCSI, but I hope someone might have an idea on how to solve this (it
  is not the first time we had this kind of problem).
  This is bacula 5.0.1 (clients 3.0.0) and vchanger 0.8.2.
 

Re: [Bacula-users] Bacula tape format vs. rsync on deduplicated file systems

2010-05-28 Thread Robert LeBlanc
On Fri, May 28, 2010 at 12:32 AM, Eric Bollengier 
eric.bolleng...@baculasystems.com wrote:

 Hello Robert,
 What would be the result if you do Incremental backup instead of full
 backup ?
 Imagine that you have 1% changes by day, it will give something like
 total_size = 30GB + 30GB*0.01 * nb_days
 (instead of 30GB * nb_days)

 I'm quite sure it will give a compression like 19:1 for 20 backups...

 This kind of comparison is the big argument of dedup companies: do 20 full
 backups and you will have a 20:1 dedup ratio, but do 19 incrementals + 1
 full and this ratio will fall down to 1:1... (It's not exactly true either,
 because you can save space when multiple systems have the same data.)
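
Eric's estimate can be sanity-checked with quick arithmetic. Assuming one
30 GB full plus 19 daily incrementals at 1% change, per his formula above:

```shell
# 1 full of 30 GB plus 19 incrementals of 1% each, vs. 20 fulls
awk 'BEGIN {
  full = 30; days = 19; rate = 0.01
  dedup_total = full + full * rate * days   # data actually stored
  naive_total = full * (days + 1)           # logical size of 20 fulls
  printf "stored=%.1fGB logical=%.0fGB ratio=%.1f:1\n", dedup_total, naive_total, naive_total / dedup_total
}'
```

That comes out around 17:1, the same order as the 19:1 Eric mentions.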


The idea was to simulate a few things at once. This kind of test could show
how multiple similar OSes could dedupe (20 Windows OSes, for example: you
only have to store those bits once for any number of Windows machines),
whereas using Bacula's incrementals you have to store the bits once per
machine and then again when you do your next full each week or month. It was
also meant to show how much you could save when doing your fulls each week
or month; a similar effect would happen for the differentials too. It wasn't
meant to be all-inclusive, just to show some trends that I was interested
in. In our environment, since everything is virtual, we don't save the OS
data and only try to save the minimum that we need; that doesn't work for
everyone, though.


  [image: backup.png]
 
  This chart shows that using the sync method, the data's compression grew
 in
  almost a linear fashion, while the Bacula data stayed close to 1x
  compression. My suspicion is that since the Bacula tape format inserts
 job
  information regularly into the stream file and lessfs uses a fixed block
  size, lessfs is not able to find much unique data in the Bacula
  stream.

 You are right, we have a current project to add a new device format that
 will be compatible with a dedup layer. I don't know yet how it will work,
 because I imagine that each dedup system works differently, and finding a
 common denominator won't be easy. A first proof of concept will certainly
 use LessFS (it is already on my radar). But as you said, depending on
 block size, alignment, etc., it's not so easy.


I think each dedupe file system can work very well with each file stored on
its own instead of in a stream; that way the start of the file is always on
a boundary that the deduplication file system uses. You might be able to use
sparse files for a stream and pad up to the block alignment, though that
would make the stream file look very large compared to what it actually uses
on a non-deduped file system. I still think that if Bacula laid the data
down in the same file structure as on the client, organized by JobId, with
some small Bacula files to hold permissions etc., it would be the most
flexible for all dedupe file systems, because it would present individual
files like they are expecting.


  Although Data Domain's variable block size feature allows it much
  better compression of Bacula data, rsync still achieved an almost 2x
  greater compression over Bacula.

 The compression on disk is better; on the network layer and the remote IO
 disk system, this is another story. BackupPC is smarter on this part (but
 has problems with big sets of files).


I'm not sure I understand exactly what you mean. I understand that BackupPC
can cause a file system to fail to mount because it exhausts the number of
hard links the fs can support. Luckily, with a deduplication file system you
don't have this problem, because you just copy the bits and the fs does the
work of finding the duplicates. A dedupe fs can even store only a small part
of a file (if most of the file is duplicate and only a small part is
unique), where BackupPC would have to write the whole file. I don't want
Bacula to adopt what BackupPC is doing; I think it's a step backwards.


 In conclusion, lessfs is a great file system and could benefit from
 variable block sizes, if they can be added, for both regular data and
 Bacula data. Bacula could also benefit greatly by providing a format
 similar to a native file system, both on lessfs and on DataDomain.

 Yes, variable block size and dynamic alignment seem to be the edge of the
 technology, but they are also heavily covered by patents (and those
 companies are not very friendly). And I can imagine that it's easy to ask
 for them, and a little more complex to implement :-)


That is one of the reasons I said "if it can be added". If there is anything
I know about OSS, it is that there are some amazing people with the ability
to think so far outside the box that these things have not been able to stop
the progress of OSS.

Robert LeBlanc
Life Sciences & Undergraduate Education Computer Support
Brigham Young University
--


Re: [Bacula-users] Pool file/job retention not updating

2010-05-28 Thread Martin Simmons
 On Fri, 28 May 2010 09:29:52 +0200, Machiel van Veen said:
 
 On Thursday 27 May 2010 08:36:05 pm Martin Simmons wrote:
   On Thu, 27 May 2010 14:40:13 +0200, Machiel van Veen said:
  
   But when I do show pool=Default in bconsole I get:
  
   JobRetention=0 secs FileRetention=15 years 4 months 1 day 11 hours 46
   mins 16 secs
  
  It is a bug in the show pool command in the current releases.  I don't know
   if it will be fixed in the next release because the latest source is no
   longer available.
  
  __Martin
 
 Thanks for your reply; however, that means that having the file and job
 retention in the pool directive does not function as I hoped/understood.
 
 I have a setup running two jobs for one client. Both jobs have their own pool 
 with their own retention times. But when the job with the shortest retention 
 time runs it also prunes jobs and file records from the other job with the 
 longer retention times set.
 
 Is it correct that I need to create two clients, each with its own
 configuration, in order to have two jobs with different retention times?
 Pruning is done based on the client, not the job and/or pool?

Setting up two clients is the only safe way to do it.

The JobRetention and FileRetention in the pool do override the settings in the
client, but not in a useful way.  The problem is that autopruning runs after
every job, but it uses the retention time from the job's pool or client.  That
single retention time is applied to all other backups for the client, even
those from other pools.
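A sketch of the two-client workaround: both Client resources point at the same File Daemon, so each job's autopruning uses its own retention. Names, address and retention values below are invented for illustration, not from the thread:

```
# bacula-dir.conf fragment (hypothetical names) -- one physical machine,
# defined twice so each job prunes with its own retention times.
Client {
  Name = host1-short-fd
  Address = host1.example.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = "secret"
  File Retention = 2 weeks
  Job Retention = 1 month
  AutoPrune = yes
}

Client {
  Name = host1-long-fd
  Address = host1.example.com   # same machine, separate catalog client
  FDPort = 9102
  Catalog = MyCatalog
  Password = "secret"
  File Retention = 6 months
  Job Retention = 1 year
  AutoPrune = yes
}
```

Each Job then references its own Client resource, so pruning after the short-retention job cannot touch records belonging to the long-retention one.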

__Martin

--

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Status of rescue CD/USB project?

2010-05-28 Thread Alan Brown

Does anyone know what the current status of the rescue CD/USB project is?

The last updates seem to be a couple of years old.






Re: [Bacula-users] Bacula tape format vs. rsync on deduplicated file systems

2010-05-28 Thread Eric Bollengier
On Friday 28 May 2010 16:42:01, Robert LeBlanc wrote:
 On Fri, May 28, 2010 at 12:32 AM, Eric Bollengier 
 
 eric.bolleng...@baculasystems.com wrote:
  Hello Robert,
  What would be the result if you did Incremental backups instead of full
  backups?
  Imagine that you have 1% changes per day; it will give something like
  total_size = 30GB + 30GB*0.01 * nb_days
  (instead of 30GB * nb_days)

  I'm quite sure it will give a compression ratio like 19:1 for 20 backups...
  
  This kind of comparison is the big argument of dedup companies: do 20
  full backups and you will get a 20:1 dedup ratio, but do 19 incrementals
  + 1 full and this ratio will fall down to 1:1... (It's not exactly true
  either, because you can save space with multiple systems having the
  same data)
 
 The idea was in some ways to simulate a few things all at once. This kind
 of test could show how multiple similar OSes could dedupe (20 Windows OSes,
 for example: you only have to store those bits once for any number of
 Windows machines), whereas using Bacula's incrementals you have to store
 the bits once per machine

In this particular case, you can use the BaseJob file-level deduplication that
allows you to store only one version of each OS. (But I admit that if the
system can do it automatically, it's better.)


 and then again when you do your next full each week or month.

Why do you want to schedule a Full backup every week? With the Accurate option,
you can adopt incremental-forever (a Differential can limit the number of
incrementals needed for restore).

If it's to have multiple copies of a particular file (which I like to advise
when using tapes), since deduplication will turn multiple copies into a
single instance, I think the result is very similar.

 It also was to show how much you could save when doing your fulls
 each week or month, a similar effect would happen for the differentials
 too. It wasn't meant to be all inclusive, but just to show some trends
 that I was interested in.

Yes, but comparing 20 full backups with 20 full copies with deduplication is
like comparing apples and oranges... At least, it should appear somewhere that
you chose the worst case for Bacula and the best case for deduplication :-)

 In our environment, since everything is virtual,
 we don't save the OS data and only try to save the minimum that we need;
 that doesn't work for everyone, though.

Yes, this is another very common approach, and I agree that sometimes you
can't do that.

It's also very practical to just rsync the whole disk and let LessFS do its
job. If you want to browse the backup, it's just a directory. With Bacula,
since incremental/full/differential backups are presented in a virtual tree,
that's not needed.

 
   [image: backup.png]
   
   This chart shows that using the rsync method, the data's compression
   grew in an almost linear fashion, while the Bacula data stayed close to
   1x compression. My suspicion is that since the Bacula tape format
   inserts job information regularly into the stream file and lessfs uses
   a fixed block size, lessfs is not able to find much unique data in the
   Bacula stream.
  
  You are right, we have a current project to add a new device format that
  will be able to be compatible with a dedup layer. I don't know yet how it
  will work, because I can imagine that each dedup system works differently,
  and finding a common denominator won't be easy. A first proof of concept
  will certainly use LessFS (it is already on my radar). But as you said,
  depending on block size, alignment, etc., it's not so easy.
 
 I think in some ways each dedupe file system could work very well with
 each file stored as its own file instead of being in a stream. That way
 the start of the file is always on a boundary that the deduplication file
 system uses. I think you might be able to use sparse files for a stream
 and always sparse up to the block alignment,

I'm not very familiar with sparse files, but I'm pretty sure that the sparse
unit is a block. So if a block is entirely empty, OK, but if you have some
bytes used inside a block, it will take 4KB.
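This hole-at-block-granularity behavior is easy to observe. A small sketch, assuming GNU coreutils on Linux; the exact block counts reported depend on the filesystem:

```shell
# Create a 10 MB file that is one big hole: no data blocks allocated yet.
truncate -s 10M sparse.img
stat -c 'size=%s bytes, allocated=%b x %B-byte blocks' sparse.img

# Write a single byte into the middle of the hole: the filesystem now has
# to allocate a whole block (typically 4 KB) just for that one byte.
printf 'x' | dd of=sparse.img bs=1 seek=5242880 conv=notrunc 2>/dev/null
sync
stat -c 'size=%s bytes, allocated=%b x %B-byte blocks' sparse.img

rm -f sparse.img
```

On ext4 the second `stat` typically reports 8 blocks of 512 bytes, i.e. one 4 KB block allocated for a single byte of payload, which is exactly the granularity described above.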

 that would make the stream file look really large compared to what it
 actually uses on a non-deduped file system. I still think that if Bacula
 lays the data down in the same file structure as on the client, organized
 by job ID, with some small Bacula files to hold permissions, etc., it
 would be the most flexible for all dedupe file systems because the backup
 would consist of individual files, as those file systems expect.

Yes, this was one way to do it, but we still have the problem of alignment and
free space in blocks. If I remember correctly, LessFS uses LZO to compress data,
so we can imagine that a 4KB block with only 200 bytes should be very small in
the end. This could be a very interesting test: just write X blocks with 200
bytes (random) each, and see if it takes X*4KB or ~ X*compress(200 bytes).
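That test can be scripted in a few lines. A sketch; the target directory is an assumption (point it at a LessFS mount to run the real experiment):

```shell
# Write X "blocks" of 4 KB, each containing only 200 random bytes followed
# by zero padding, then compare apparent size with allocated space.
DIR=/tmp            # assumption: replace with your LessFS mount point
X=1000
f="$DIR/padtest.dat"
: > "$f"
i=0
while [ "$i" -lt "$X" ]; do
    head -c 200 /dev/urandom >> "$f"   # 200 bytes of payload
    head -c 3896 /dev/zero   >> "$f"   # pad to the 4 KB boundary
    i=$((i + 1))
done
echo "apparent size: $(stat -c %s "$f") bytes (X*4KB = $((X * 4096)))"
echo "allocated:     $(du -k "$f" | cut -f1) KB"
rm -f "$f"
```

On a dedup/compressing filesystem, `du` reporting far less than X*4KB would confirm that the mostly-empty blocks compress away.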

It will also allow storing metadata in special blocks. So the basic
modification will

[Bacula-users] Windows client silent installation

2010-05-28 Thread uminds

Thanks, 

I would like to do this in a completely automatic way. That means end users
don't need to interact with the installation. How do I get rid of the warning
message then?

+--
|This was sent by ng_ke...@yahoo.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] Windows client silent installation

2010-05-28 Thread Heitor Faria
It's a bug... Open a bug report. =P

Regards,

Heitor Faria

uminds wrote:
 Thanks, 

 I would like to do this in a completely automatic way. That means end
 users don't need to interact with the installation. How do I get rid of
 the warning message then?

 +--
 |This was sent by ng_ke...@yahoo.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--







[Bacula-users] full LTO-4 tape @ only 176 GIG

2010-05-28 Thread lweinlan

Hello, I am running backups using Bacula and an LTO-4 tape library. Everything
is running fine, but this morning the first LTO-4 tape that I used is marked
as volume status = Full with only 176.24 GB.
An LTO-4 tape has a capacity of 800 GB native and 1.6 TB compressed.
I have absolutely no idea why this tape is already full. Could someone please
help? What modifications need to be made in the bacula-dir.conf file to fix
this?

Thank you

+--
|This was sent by lwein...@yahoo.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] full LTO-4 tape @ only 176 GIG

2010-05-28 Thread John Drescher
On Fri, May 28, 2010 at 1:16 PM, lweinlan
bacula-fo...@backupcentral.com wrote:

 Hello, I am running backups using Bacula and an LTO-4 tape library.
 Everything is running fine, but this morning the first LTO-4 tape that I
 used is marked as volume status = Full with only 176.24 GB.
 An LTO-4 tape has a capacity of 800 GB native and 1.6 TB compressed.
 I have absolutely no idea why this tape is already full. Could someone
 please help? What modifications need to be made in the bacula-dir.conf
 file to fix this?


This is usually caused by an error: when Bacula encounters a write
error, it assumes the tape is full at that point.


John



Re: [Bacula-users] Bacula tape format vs. rsync on deduplicated file systems

2010-05-28 Thread Robert LeBlanc
On Fri, May 28, 2010 at 10:48 AM, Eric Bollengier 
eric.bolleng...@baculasystems.com wrote:

First, thank you for the kind replies; they are helping me make sure I see
the big picture.

On Friday 28 May 2010 16:42:01, Robert LeBlanc wrote:
  On Fri, May 28, 2010 at 12:32 AM, Eric Bollengier 
 
  eric.bolleng...@baculasystems.com wrote:
   Hello Robert,
   What would be the result if you did Incremental backups instead of full
   backups?
   Imagine that you have 1% changes per day; it will give something like
   total_size = 30GB + 30GB*0.01 * nb_days
   (instead of 30GB * nb_days)

   I'm quite sure it will give a compression ratio like 19:1 for 20 backups...
  
   This kind of comparison is the big argument of dedup companies: do 20
   full backups and you will get a 20:1 dedup ratio, but do 19 incrementals
   + 1 full and this ratio will fall down to 1:1... (It's not exactly true
   either, because you can save space with multiple systems having the
   same data)
 
  The idea was in some ways to simulate a few things all at once. This kind
  of test could show how multiple similar OSes could dedupe (20 Windows
  OSes, for example: you only have to store those bits once for any number
  of Windows machines), whereas using Bacula's incrementals you have to
  store the bits once per machine

 In this particular case, you can use the BaseJob file-level deduplication
 that allows you to store only one version of each OS. (But I admit that if
 the system can do it automatically, it's better.)


I agree, I haven't looked into BaseJobs yet because they are not the easiest
thing to understand. Since I'm very pressed for time, I don't have a lot of
time to commit to reading. I plan on understanding it, but when a system can
do it automatically and transparently, I like that a lot.


  and then again when you do your next full each week or month.

 Why do you want to schedule a Full backup every week? With the Accurate
 option, you can adopt incremental-forever (a Differential can limit the
 number of incrementals needed for restore).

 If it's to have multiple copies of a particular file (which I like to
 advise when using tapes), since deduplication will turn multiple copies
 into a single instance, I think the result is very similar.


We are using accurate jobs on a few machines; however, I have not scheduled
the roll-ups yet as I haven't had time to read the manual enough. I need to
do it soon, as I have months of incrementals without any fulls in between. I
do like having multiple copies of my files on tape; on disk, not so much. The
reason is that I've had tapes go bad, while with disk I have a lot of
redundancy built in.

 It also was to show how much you could save when doing your fulls
  each week or month, a similar effect would happen for the differentials
  too. It wasn't meant to be all inclusive, but just to show some trends
  that I was interested in.

 Yes, but comparing 20 full backups with 20 full copies with deduplication
 is like comparing apples and oranges... At least, it should appear
 somewhere that you chose the worst case for Bacula and the best case for
 deduplication :-)


Please remember that the Bacula tape files were on a lessfs file system, so
the same amount of data was written using rsync and Bacula, just in different
formats on lessfs. So in the best case they should have had the same dedupe
rate. The idea was to see how both formats fared on lessfs.


  In our environment, since everything is virtual,
  we don't save the OS data and only try to save the minimum that we need;
  that doesn't work for everyone, though.

 Yes, this is another very common approach, and I agree that sometimes you
 can't do that.

 It's also very practical to just rsync the whole disk and let LessFS do
 its job. If you want to browse the backup, it's just a directory. With
 Bacula, since incremental/full/differential backups are presented in a
 virtual tree, that's not needed.


Understandable. In a disaster recovery situation with Bacula, if the on-disk
format were a tree, you could browse to the latest backup of your catalog,
import it, and off you go. Right now, I have no clue which of the 100 tapes I
have holds the latest catalog backup; I would have to scan them all, and if
the backup spans tapes, I have to figure out what order to scan the tapes in
to recover the backup, and that could take forever. Now that I've thought
about it, I think it's time for a new pool for catalog backups, sigh.

  I think in some ways each dedupe file system could work very well with
  each file stored as its own file instead of being in a stream. That way
  the start of the file is always on a boundary that the deduplication
  file system uses. I think you might be able to use sparse files for a
  stream and always sparse up to the block alignment,

 I'm not very familiar with sparse files, but I'm pretty sure that the
 sparse unit is a block. So if a block is entirely empty, OK, but if you
 have some bytes used inside a block, it will take 4KB.


I'm not an expert with sparse 

Re: [Bacula-users] Bacula tape format vs. rsync on deduplicated file systems

2010-05-28 Thread Phil Stracchino
On 05/28/10 13:24, Robert LeBlanc wrote:
 I agree, I haven't looked into BaseJobs yet because they are not the
 easiest thing to understand. Since I'm very pressed for time, I don't
 have a lot of time to commit to reading. I plan on understanding it, but
 when a system can do it automatically and transparently, I like that a lot.

The basic concept behind base jobs is that you define one machine (or
the base OS install image on that machine) as a reference install for a
class of similar machines, and do a full backup of *it*, but then for
the other machines in the class, you back up only user data plus any
base system files that are different from those on the reference machine.

Once I have all of my Windows boxes on the same version of Windows again
(right now, half are XP Pro and half are 2K Pro), I'm planning to set up
a base job for them.
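In configuration terms, the reference/class split described above maps onto Bacula's base-job directives roughly like this. A sketch only: the resource names are invented, and base jobs require Accurate mode:

```
# Hypothetical bacula-dir.conf fragment for base jobs.
Job {
  Name = "WinRefBase"        # full backup of the reference machine
  Level = Base
  Client = win-ref-fd
  FileSet = "WindowsAll"
  JobDefs = "DefaultJob"
}

Job {
  Name = "WinBox1"
  Client = winbox1-fd
  Base = "WinRefBase"        # files identical to the base are not re-stored
  Accurate = yes
  FileSet = "WindowsAll"
  JobDefs = "DefaultJob"
}
```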


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] full LTO-4 tape @ only 176 GIG

2010-05-28 Thread Martin Simmons
 On Fri, 28 May 2010 13:16:33 -0400, lweinlan  said:
 
 Hello, I am running backups using Bacula and an LTO-4 tape library.
 Everything is running fine, but this morning the first LTO-4 tape that I
 used is marked as volume status = Full with only 176.24 GB.

 An LTO-4 tape has a capacity of 800 GB native and 1.6 TB compressed.

 I have absolutely no idea why this tape is already full. Could someone
 please help? What modifications need to be made in the bacula-dir.conf
 file to fix this?

The Bacula log file is the first place to look, to see if it says why the tape
was marked as full.

Also check the system log files/console for error messages.

__Martin



Re: [Bacula-users] Bacula tape format vs. rsync on deduplicated file systems

2010-05-28 Thread Robert LeBlanc
On Fri, May 28, 2010 at 11:49 AM, Phil Stracchino ala...@metrocast.netwrote:

 On 05/28/10 13:24, Robert LeBlanc wrote:
  I agree, I haven't looked into BaseJobs yet because they are not the
  easiest thing to understand. Since I'm very pressed for time, I don't
  have a lot of time to commit to reading. I plan on understanding it,
  but when a system can do it automatically and transparently, I like
  that a lot.

 The basic concept behind base jobs is that you define one machine (or
 the base OS install image on that machine) as a reference install for a
 class of similar machines, and do a full backup of *it*, but then for
 the other machines in the class, you back up only user data plus any
 base system files that are different from those on the reference machine.

 Once I have all of my Windows boxes on the same version of Windows again
 (right now, half are XP Pro and half are 2K Pro), I'm planning to set up
 a base job for them.


That will be near impossible for my Linux servers; they all seem to be at
different patch levels. I guess it would do OK for my Windows machines, but
if I need bare-metal restore, there is a reason: the OS was configured
differently from the standard. I think this is where dedup could really
provide a benefit.

When you patch your servers, do you have to redo your base at the same time
to keep it synced?

Robert LeBlanc
Life Sciences & Undergraduate Education Computer Support
Brigham Young University


Re: [Bacula-users] Status of rescue CD/USB project?

2010-05-28 Thread Martin Simmons
 On Fri, 28 May 2010 17:35:13 +0100, Alan Brown said:
 
  Does anyone know what the current status of the rescue CD/USB project is?
 
 The last updates seem to be a couple of years old.

There is a bacula-rescue-5.0.2.tar.gz in the downloads dated 2010-04-30.

I don't know if it works, but at least someone released it :-)

__Martin



[Bacula-users] Sqlite3 to PostgreSQL 8.4 catalogue migration

2010-05-28 Thread Paul Mather
Does anyone have a working script to migrate a Sqlite3 catalogue database to 
PostgreSQL 8.4.4?

I'm using a very recent FreeBSD 8-STABLE, and the sqlite2pgsql script in the
examples/database directory of the source code doesn't work for me.  Has anyone
got this to work successfully under a current Bacula installation on FreeBSD?

I'm using these versions of the ports:

bacula-server-5.0.0
postgresql-client-8.4.4
postgresql-server-8.4.4
sqlite3-3.6.23.1_1

My current Bacula server is backing up to disk and, being a relatively recent 
install, none of the volumes have been recycled.  So, as a way of getting a 
working PostgreSQL catalogue in lieu of a non-working sqlite2pgsql script, I 
thought I would run bscan -s to recover catalogue information from the volumes 
(having first run create_postgresql_database, make_postgresql_tables, and 
grant_postgresql_privileges).  Will this recreate the catalogue entirely, or 
will I be missing something other than log data?

Cheers,

Paul.


Re: [Bacula-users] Bacula tape format vs. rsync on deduplicated file systems

2010-05-28 Thread Phil Stracchino
On 05/28/10 14:04, Robert LeBlanc wrote:
 When you patch your servers, do you have to redo your base at the same
 time to keep it synced?

I don't think so, though if it's a major update, you would probably want
to do so.  On the reference client, it should be handled just like any
other incremental change.

-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] [Bacula-devel] bconsole scripting

2010-05-28 Thread Kern Sibbald
On Friday 28 May 2010 14:03:42 Olaf Zevenboom wrote:
 Kern Sibbald wrote:
  On Thursday 27 May 2010 17:23:46 Morty Abzug wrote:
  On Tue, May 25, 2010 at 05:01:52PM +0200, Kern Sibbald wrote:
  I don't think there is any need for either a -e option or a -b option.
  Both can easily be done via the shell.
 
   -e can indeed easily be done from the shell.  But the caller isn't
   always a shell environment.  It's usually possible to work around this,
   of course, with something like sh -c 'echo mount | bconsole'; -e would
   just make this easier.
 
   -b cannot easily be done by the shell.  The intent is to remove
  formatting done for human convenience.
 
  I have been doing *lots* of scripting of bconsole for 8 years now with no
  problems, so I really don't understand the problem.
 
  What do you mean by "remove formatting done for human convenience"?

 Not sure what *he* means, but I can guess: mysql has various options to
 run without formatted output, or with a special way of formatting output,
 when used from the command line or a script. To relate this to Bacula:
 list media scripted through bconsole (echo list media | bconsole)
 produces a list of media formatted as an ASCII table with columns,
 fields etc. marked up by +, - and | characters.
 Comparing this to mysql again: mysql allows the removal of
 column/field names and other layout; bconsole does not (see man mysql).
 This results in the need to get rid of those from within the script.
 Fine if you are using Perl, Python, PHP etc., but a bit more troublesome
 if you are scripting in Bash (cut, grep, awk, sed can help of course).
 So the point being: bconsole's output must be stripped on every occasion,
 which leads to more coding in scripts. Depending on the scripting
 language this is hard or easy, but needed nevertheless.

OK, I see.

bconsole (actually Bacula) has a good number of commands, known as dot
commands, that produce output in a machine-friendly format rather than a
human-friendly one.  Please read the manual.  The commands you want are
probably already implemented.
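As an illustration of the Bash case discussed above: bconsole's table decoration can be stripped with awk. A sketch against a canned sample of the list media layout; the sample rows are invented, and in real use you would pipe the output of "echo list media | bconsole" through the same filter (or avoid the issue with a dot command such as .jobs):

```shell
# Invented sample of bconsole's ASCII-table output.
sample='+---------+------------+
| MediaId | VolumeName |
+---------+------------+
|       1 | Vol-0001   |
|       2 | Vol-0002   |
+---------+------------+'

# Keep rows that contain field separators, skip the header row, and trim
# whitespace from the VolumeName column.
echo "$sample" | awk -F'|' 'NF > 2 && $2 !~ /MediaId/ {
    gsub(/^[ \t]+|[ \t]+$/, "", $3)
    print $3
}'
```

This prints one bare volume name per line (Vol-0001, Vol-0002), which is easy to consume from a Bash script.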

Kern
