Re: [Bacula-users] Debian Squeeze (6, current) to Wheezy (7, testing) broke mtx

2012-07-29 Thread Jim Barber
On 6/07/2012 12:37 PM, comport3 wrote:
> Jim, you are truly a life saver! Thank you very, very much!!!

FYI, Debian has just closed this bug today.
Version 175-4 of the udev package will re-enable the sg devices again.
It isn't in Debian testing yet as far as I can see, but I expect it will filter down in a few days.

Regards,

Jim Barber
Senior Systems Administrator
Primary Health Care Limited
Innovations & Technology Group




Re: [Bacula-users] Debian Squeeze (6, current) to Wheezy (7, testing) broke mtx

2012-07-05 Thread Jim Barber
On 6/07/2012 5:41 AM, comport3 wrote:
> Hi All,
> 
> Performing a test upgrade of our Debian Squeeze server to the new Wheezy 
> release has broken the mtx package.
> 
> MTX relies on a /dev/sg* device (under Squeeze, the autochanger was /dev/sg4) 
> - however under Wheezy there is only /dev/sch0.
> 
> Trying to run mtx against it returns:
> 
> mtx -f /dev/sch0
> /dev/sch0 is not an sg device, or old sg driver
> 
> lsscsi -g
> [0:0:32:0]   enclosu DP       BACKPLANE         1.05  -          -
> [0:2:0:0]    disk    DELL     PERC 6/i          1.22  /dev/sda   -
> [1:0:0:0]    cd/dvd  HL-DT-ST DVD-ROM GDR-T10N  1.02  /dev/sr0   -
> [3:0:17:0]   tape    IBM      ULT3580-TD4       B710  /dev/st0   -
> [3:0:17:1]   mediumx IBM      3573-TL           9t30  /dev/sch0  -
> [3:0:18:0]   tape    IBM      ULT3580-TD4       B710  /dev/st1   -
> 
> 
> This is possibly a link to someone encountering the same problem: 
> https://groups.google.com/forum/?fromgroups#!topic/linux.debian.bugs.dist/6thALMZOr8k
> 
> Anyone got any ideas?

Yes, I raised that Debian bug/wish-list item:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=657948

I have just held my udev package at the version that isn't broken (Debian 
package version 172-1).
But for a fresh install that wouldn't be an option.

The change removed the following lines from /lib/udev/rules.d/80-drivers.rules:

SUBSYSTEM=="scsi", ENV{DEVTYPE}=="scsi_device", TEST!="[module/sg]", \
RUN+="/sbin/modprobe -b sg"

You could probably put that rule into your own local file underneath the
/etc/udev/rules.d/ directory.
Failing that, manually running '/sbin/modprobe -b sg' at the right time would
probably also do the trick.
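
For example, something along these lines should do it (the local rules file name below is my own choice; any name under /etc/udev/rules.d/ works):

  # Re-create the dropped rule in a local udev rules file.
  echo 'SUBSYSTEM=="scsi", ENV{DEVTYPE}=="scsi_device", TEST!="[module/sg]", RUN+="/sbin/modprobe -b sg"' \
      > /etc/udev/rules.d/80-local-sg.rules

  # Reload the udev rules and load the sg module right away so the
  # /dev/sg* devices appear without waiting for a reboot.
  udevadm control --reload-rules
  /sbin/modprobe -b sg

  # Verify the generic devices are back.
  lsscsi -g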

Regards,

--
Jim Barber
DDI Health



Re: [Bacula-users] One Copy Job for ALL Clients?

2011-09-11 Thread Jim Barber
On 12/09/2011 10:26 AM, Rodrigo Renie Braga wrote:
> Hello everyone.
> 
> In my first attempt using Copy Jobs, I was creating one Copy Job for each of 
> my Clients, with the following SQL Selection Pattern:
> 
> SELECT max(JobId)
>  FROM Job
> WHERE Name = 'someClientJobName'
>   AND Type = 'B'
>   AND Level = 'F'
>   AND JobStatus in ('T', 'W')
>   AND JobBytes > 0;
> 
> Basically, every Client had 2 Jobs: one for a normal daily Backup and one for copying its last Full Backup, using the above SQL Query.
> 
> My question is: do I really need to create a different Copy Job for every 
> Client or can I use only ONE Copy Job that Selects the
> latest Full Backup JobID of all my clients?

I have my full backups go to their own pool.
Then I have just one copy job defined like so:

Job {
  Name = "OffsiteBackup"
  JobDefs = "DefaultJob"
  Type = Copy
  Level = Full
  # Uses the 'Next Pool' definition from FullPool for where to write the copies to.
  Pool = FullPool
  # Use SQL to select the most recent (successful) Full backup for each job written to the FullPool pool.
  Selection Type = SQLQuery
  Selection Pattern = "SELECT MAX(Job.JobId) FROM Job, Pool WHERE Job.Level = 'F' and Job.Type = 'B' and Job.JobStatus = 'T' and Pool.Name = 'FullPool' and Job.PoolId = Pool.PoolId GROUP BY Job.Name ORDER BY MAX(Job.JobId);"
  Allow Duplicate Jobs = yes
  Allow Higher Duplicates = no
}

That SQL, formatted over multiple lines for easier reading, is:

SELECT
    MAX(Job.JobId)
FROM
    Job, Pool
WHERE
    Job.Level = 'F'
    and Job.Type = 'B'
    and Job.JobStatus = 'T'
    and Pool.Name = 'FullPool'
    and Job.PoolId = Pool.PoolId
GROUP BY
    Job.Name
ORDER BY
    MAX(Job.JobId);
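
To fire the copy off manually for testing, the job can be started from bconsole, which is the same invocation the Admin job uses in an older message further down this archive:

  echo 'run job=OffsiteBackup yes' | bconsole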

Regards,

--
Jim Barber
DDI Health



Re: [Bacula-users] SQLite - no longer supported

2011-03-15 Thread Jim Barber
On 26/02/2011 11:50 AM, Dan Langille wrote:
> On 2/25/2011 10:54 AM, C M Reinehr wrote:
>> On Thu 24 February 2011 07:27:48 pm Dan Langille wrote:
>>> On 2/24/2011 10:08 AM, C M Reinehr wrote:
>>>> On Wed 23 February 2011 10:30:11 pm Dan Langille wrote:
>>>>> A recent bug report (http://bugs.bacula.org/view.php?id=1688) which
>>>>> indicated that we are no longer supporting SQLite.
>>>>>
>>>>> I feel it's safe to stop regression testing against it.
>>>> I'm sorry to hear that. I've been using SQLite for the entire time that
>>>> I've been using Bacula -- ten years or so -- with absolutely no
>>>> difficulty. Admittedly, my needs are limited -- three servers & eight PCs
>>>> -- but it's been simple to administer & troubleshoot, and reliable.
>>> What version of Bacula are you using now?  What version of SQLite?
>> I stick with Debian Stable and still am running Lenny. The Bacula version is
>> 2.4.4 and the SQLite version is 3.5.9. In the near future I'll be upgrading
>> to Squeeze and it looks as if that will be Bacula v5.0.2 & SQLite v3.7.3.
>> (So, in any case, I guess I'm good for another two years or so, until the
>> next Debian Stable is released! ;-)
> I've been running daily regression tests for MySQL, PostgreSQL, and 
> SQLite.  The latter crashes far more than the other two.  Look at 
> http://regress.bacula.org/ and look for langille.  You'll see.

I recently switched from SQLite3 to PostgreSQL on a Debian system.
I was already using the Bacula 5.0.2 packages, and moved from SQLite3 3.7.5 to PostgreSQL 9.0.3.
Attached is a perl script I wrote to do the migration with.
The scripts in the Bacula source examples area did not work for me.
I have only tested it on my system, so it may not work for you.
I haven't had a system that has gone through lots of previous versions of Bacula, which could have old unused tables in the database, etc.
Use at your own risk.

The steps to do the conversion are:

- Make sure you have plenty of disk space somewhere.
  You'll need enough for your SQLite3 dump, plus approximately that size again for the data files produced by the conversion, plus enough space for the PostgreSQL database.
  My SQLite3 dump was 7GB, which produced an 11GB PostgreSQL database.

- Keep a safe copy of your configuration files in the /etc/bacula/ directory.

- As the bacula user, dump the SQLite3 database by running: 
/etc/bacula/scripts/make_catalog_backup.pl MyCatalog
  You can become the bacula user by running: su -s /bin/bash - bacula

- Move the /var/lib/bacula/bacula.sql dump file produced from the step above to where you have plenty of disk space.

- Remove (or purge) your bacula-common-sqlite3, bacula-director-sqlite3 and bacula-sd-sqlite3 packages via apt-get, or aptitude, or whatever tool you use to manage your Debian packages with.

- Install the postgresql, bacula-common-pgsql, bacula-director-pgsql, and bacula-sd-pgsql packages.
  This creates the bacula database for you with all the correct tables etc.

- Shut down the newly installed bacula director and storage daemons.
  eg. service bacula-director stop ; service bacula-sd stop

- In the directory where you moved the bacula.sql file to, as the bacula user, run my attached script passing the bacula.sql file as a parameter.
  eg. ./bacula_sqlite3_to_pgsql.pl bacula.sql

- The conversion process will begin and it can take a long time.

- If successful you shouldn't see any errors at all.
  If not, you'll have to address the errors by fixing the script and running it again.
  Each time it runs it regenerates the data files, truncates the appropriate PostgreSQL tables and loads the data into them again.
  When the data is loaded into the tables, the serial type columns have their sequence numbers updated so that newly inserted data will get correct, non-overlapping serial numbers.

- You'll need to merge your /etc/bacula/ configuration files back into place.
  In my case, the only change I needed to make to my original configuration files was in the Catalog {} section, setting the correct database parameters.

- Start up the bacula director and storage daemon.

- Test.
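
Pulled together as a rough shell transcript (Debian paths as described above; /srv/migrate is just a placeholder for wherever you have the disk space):

  # As the bacula user, dump the SQLite3 catalog.
  su -s /bin/bash - bacula -c '/etc/bacula/scripts/make_catalog_backup.pl MyCatalog'

  # Move the dump somewhere with plenty of free space.
  mv /var/lib/bacula/bacula.sql /srv/migrate/

  # Swap the sqlite3 packages for the pgsql ones.
  apt-get remove bacula-common-sqlite3 bacula-director-sqlite3 bacula-sd-sqlite3
  apt-get install postgresql bacula-common-pgsql bacula-director-pgsql bacula-sd-pgsql

  # Stop the newly installed daemons before loading data.
  service bacula-director stop ; service bacula-sd stop

  # Run the conversion as the bacula user.
  su -s /bin/bash - bacula -c 'cd /srv/migrate && ./bacula_sqlite3_to_pgsql.pl bacula.sql'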

I hope this is helpful to someone out there.

Obviously you should test this on a virtual machine or something before you do it for real.
If it all goes bad, you could reinstall your bacula-*-sqlite3 packages again and import the bacula.sql dump into them to back out.
Although I didn't try this, as my conversion worked okay.

Regards,

--
Jim Barber
DDI Health

#!/usr/bin/perl

=head1 DESCRIPTION

This program filters a Bacula SQLite3 database dump, as produced by the
/etc/bacula/scripts/make_catalog_backup.pl script, and turns
it into a format suitable for importing into a PostgreSQL database.

Re: [Bacula-users] SQL Query Copy Jobs show last Full

2011-03-10 Thread Jim Barber
On 18/02/2011 7:28 PM, Dan Langille wrote:
> On 2/17/2011 8:49 PM, Jim Barber wrote:
>> On 17/02/2011 6:54 PM, Torsten Maus wrote:
>>>
>>> My idea is that I copy the LAST Full Backups of ALL Clients to Tape.
>>> Hence, since I am not familiar with the SQL commands so well, I am
>>> looking for support from your side for such a pattern
>>>
>>> The only "rules" are:
>>>
>>> - I need that last Full Backup of any client, this shall be copied.
>>> That's it?
>>>
>>> Can somebody help me with the correct SQL Query syntax?
>>>
>>
>> I use the following SQL to achieve that:
>>
>> SELECT MAX(Job.JobId) FROM Job, Pool WHERE Job.Level = 'F' and Job.Type
>> = 'B' and Job.JobStatus = 'T' and Pool.Name = 'FullPool' and Job.PoolId
>> = Pool.PoolId GROUP BY Job.Name ORDER BY Job.JobId;
>>
>> You'll probably want to change 'FullPool' to the name of the pool where
>> you are directing your full backups to.
>>
>> If the pool you are selecting from doesn't matter then you could
>> probably simplify the SQL to be as follows:
>>
>> SELECT MAX(Job.JobId) FROM Job WHERE Job.Level = 'F' and Job.Type = 'B'
>> and Job.JobStatus = 'T' GROUP BY Job.Name ORDER BY Job.JobId;
>
> FYI, this is non-standard SQL.
>
> bacula=# SELECT MAX(Job.JobId) FROM Job WHERE Job.Level = 'F' and Job.Type = 
> 'B' and Job.JobStatus = 'T' GROUP BY Job.Name ORDER
> BY Job.JobId;
> ERROR:  column "job.jobid" must appear in the GROUP BY clause or be used in 
> an aggregate function
> LINE 1: ...nd Job.JobStatus = 'T' GROUP BY Job.Name ORDER BY Job.JobId;
>  ^
> bacula=#
>
> It works under MySQL because MySQL is doing stuff for you under the covers.

Just to finish this topic off with a correct answer...

I was using SQLite3, which worked with the statement I had.
However I've been working on moving over from SQLite3 to PostgreSQL now that
I've read that the use of SQLite3 is deprecated.
The corrected version of the statement above is:

SELECT MAX(Job.JobId) FROM Job WHERE Job.Level = 'F' and Job.Type = 'B' and Job.JobStatus = 'T' GROUP BY Job.Name ORDER BY MAX(Job.JobId);

The only part that has changed is using the MAX() aggregate function in the ORDER BY clause.
This works in both SQLite3 and PostgreSQL (and I'd assume MySQL, but I haven't tested).
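
A quick way to sanity-check the statement against both engines (the database name and catalog path here are the Debian defaults, so they are assumptions; run as a user with access to the catalog):

  # PostgreSQL
  psql -d bacula -c "SELECT MAX(Job.JobId) FROM Job WHERE Job.Level = 'F' and Job.Type = 'B' and Job.JobStatus = 'T' GROUP BY Job.Name ORDER BY MAX(Job.JobId);"

  # SQLite3
  sqlite3 /var/lib/bacula/bacula.db "SELECT MAX(Job.JobId) FROM Job WHERE Job.Level = 'F' and Job.Type = 'B' and Job.JobStatus = 'T' GROUP BY Job.Name ORDER BY MAX(Job.JobId);"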

Regards,



Re: [Bacula-users] SQL Query Copy Jobs show last Full

2011-02-17 Thread Jim Barber

On 17/02/2011 6:54 PM, Torsten Maus wrote:

> My idea is that I copy the LAST Full Backups of ALL Clients to Tape.
> Hence, since I am not familiar with the SQL commands so well, I am
> looking for support from your side for such a pattern.
>
> The only "rules" are:
>
> - I need that last Full Backup of any client, this shall be copied.
> That's it?
>
> Can somebody help me with the correct SQL Query syntax?


I use the following SQL to achieve that:

SELECT MAX(Job.JobId) FROM Job, Pool WHERE Job.Level = 'F' and Job.Type = 'B' and Job.JobStatus = 'T' and Pool.Name = 'FullPool' and Job.PoolId = Pool.PoolId GROUP BY Job.Name ORDER BY Job.JobId;

You'll probably want to change 'FullPool' to the name of the pool
where you are directing your full backups to.

If the pool you are selecting from doesn't matter then you could
probably simplify the SQL to be as follows:

SELECT MAX(Job.JobId) FROM Job WHERE Job.Level = 'F' and Job.Type = 'B' and Job.JobStatus = 'T' GROUP BY Job.Name ORDER BY Job.JobId;

Regards,
--
Jim Barber
DDI Health



Re: [Bacula-users] VirtualFull using tape drives

2011-01-07 Thread Jim Barber
On 8/01/2011 8:02 AM, penne...@sapo.pt wrote:

> PS: Do you know if replacing full with virtualfull is possible for
> standard disk based storage daemons?
>
> I thought that one cannot read and write simultaneously from the same
> SD, so you cannot create a VirtualFull from the previous VirtualFull
> + incrementals, as the two VirtualFulls are stored in the same pool of
> the same SD.
>
> Penne

I'd imagine that the same technique should still work with disk based pools.
I haven't tried as we don't have the disk space to do so.

I don't think there is a restriction on reading and writing to the same storage 
pool simultaneously.
If there were then my tape solution wouldn't work.
You just can't read and write to the same Volume within a pool.

So when you create a new VirtualFull it will not write to a volume that it 
needs to read from.
When a VirtualFull backup starts, it generates a list of volumes that it needs 
to read from.
It will then choose a volume not in the read list to be the destination.

Regards,

--
Jim Barber
DDI Health



Re: [Bacula-users] VirtualFull using tape drives

2011-01-06 Thread Jim Barber
# Default pool definition used by incremental backups.
# We wish to be able to restore files for any day for at least 2
# weeks, so set the retention to 13 days.
#
Pool {
  Name = Default
  Volume Retention = 13 days
  Pool Type = Backup
  # Automatically prune and recycle volumes.
  AutoPrune = yes
  Recycle = yes
  # Do not use tapes whose labels start with CLN since they are cleaning tapes.
  Cleaning Prefix = "CLN"
  Storage = TL2000
  # Get tapes from the scratch pool and return them to the scratch pool when they are purged.
  Scratch Pool = Scratch
  Recycle Pool = Scratch
  # The location where the VirtualFull backups will be written to.
  Next Pool = FullPool
}

# Pool used by Full and VirtualFull backups.
# Keep for 3 weeks, so set the retention to 20 days. (3*7-1)
Pool {
  Name = FullPool
  Volume Retention = 20 days
  Pool Type = Backup
  # Automatically prune and recycle volumes.
  AutoPrune = yes
  Recycle = yes
  # Do not use tapes whose labels start with CLN since they are cleaning tapes.
  Cleaning Prefix = "CLN"
  Storage = TL2000
  # Get tapes from the scratch pool and return them to the scratch pool when they are purged.
  Scratch Pool = Scratch
  Recycle Pool = Scratch
  # The location where the copies go for offsite backups.
  Next Pool = CopyPool
}

In the two schedule definitions you can see that the incremental
backups are set at priority 11, then the admin job to eject the tape
is at priority 12, and then the VirtualFull is at priority 13.
This all makes sure that they happen in the correct sequence.
I'd imagine that for this to work you shouldn't modify the settings
that let jobs with different priorities run at the same time.
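
The schedule definitions themselves were cut off at the top of this archived message, but from the description they would be shaped something like this (the names and times are illustrative guesses, not the original config):

  Schedule {
    Name = "NightlyIncrementals"
    # Incremental jobs run at Priority = 11 (set on their Job resources).
    Run = Level=Incremental mon-fri at 21:00
  }

  Schedule {
    Name = "WeeklyConsolidation"
    # The tape-eject Admin job (Priority = 12) and the VirtualFull job
    # (Priority = 13) both run off this schedule; the priorities queue
    # them behind the incrementals in the right order.
    Run = Level=VirtualFull fri at 23:00
  }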

In the last pool you can see a hint of me using Copy jobs as well.
I haven't included the entries for these, but after the VirtualFull
backups are complete, I produce Copies of them for off-site
rotation.
There were a few tricks related to them as well.
Mainly to do with the Copy job running its SQL query to gather the list of
latest full backups at the time it is scheduled, as opposed to when it can
run.
I solved this with an admin job to kick off the Copy job rather than
scheduling it directly.

Regards,
--
Jim Barber
DDI Health



Re: [Bacula-users] copy job scheduled immediately after backup job

2010-08-23 Thread Jim Barber

On 23/08/2010 7:39 PM, James Harper wrote:
> If I schedule a copy job immediately after a backup job (eg like 1
> second after), when does the selection actually get done? I want the
> copy to be of the job that is running now, but I think that the
> selection happens when the job is queued not when it is ready to run so
> it would not see the job that is running now... does that sound right?
>
> Thanks
>
> James

Yes, I also found that the selection happens at schedule time rather than
execution time.
I used an Admin job to get around it.
In my case I have the following two jobs to handle my copy job:

Job {
   Name = "ScheduleOffsite"
   JobDefs = "DefaultJob"
   Type = Admin
   Schedule = "OffsiteBackupSchedule"
   RunAfterJob = "/bin/sh -c \"/bin/echo 'run job=OffsiteBackup yes' \| /usr/bin/bconsole\""
}

Job {
   Name = "OffsiteBackup"
   JobDefs = "DefaultJob"
   Type = Copy
   Level = Full
   # Uses the 'Next Pool' definition from FullPool for where to write the copies to.
   Pool = FullPool
   # Use SQL to select the most recent (successful) Full backup for each job written to the FullPool pool.
   Selection Type = SQLQuery
   Selection Pattern = "SELECT MAX(Job.JobId) FROM Job, Pool WHERE Job.Level = 'F' and Job.Type = 'B' and Job.JobStatus = 'T' and Pool.Name = 'FullPool' and Job.PoolId = Pool.PoolId GROUP BY Job.Name ORDER BY Job.JobId;"
   Allow Duplicate Jobs = yes
   Allow Higher Duplicates = no
}

The Admin job is the one I put in the schedule behind a Virtual Full backup job 
that I run at the end of each week.
That way the "OffsiteBackup" job doesn't get a chance to run until it is ready 
to execute.
Then the SQL selection will execute at the correct time, once the most recent 
Virtual Full backups are complete.

Regards,

--
Jim Barber
DDI Health



Re: [Bacula-users] Crashing storage director. Need help getting trace.

2009-12-14 Thread Jim Barber
Jim Barber wrote:
> 
> Thanks Martin.
> 
> I've compiled and installed version 3.1.6 from a git pull I did on 10th Dec.
> I'm not sure if this new version will crash or not.
> But I've manually attached a gdb session to it just in case it does.
> 
> Thanks.

I'm not having much luck with this.
When I attached to the process with gdb it seems to interfere with it.
It's like it stops running.
It no longer responds to status commands etc.

I'm not familiar enough with gdb to resolve it.
I tried the 'continue' command just in case attaching had stopped the process,
but it doesn't make any difference.
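
For reference, the attach procedure Martin described looks roughly like this (the binary path is the Debian one; btraceback.gdb ships with the Bacula source):

  # Attach to the running storage daemon.
  gdb /usr/sbin/bacula-sd
  (gdb) attach <pid-of-bacula-sd>
  (gdb) continue
  # ...wait for the crash, then capture state with something like:
  (gdb) backtrace
  (gdb) thread apply all backtrace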

Regards,

--
Jim Barber
DDI Health



Re: [Bacula-users] Crashing storage director. Need help getting trace.

2009-12-13 Thread Jim Barber
Martin Simmons wrote:
> 
> Try doing it interactively by attaching gdb to the bacula-sd process before it
> crashes (run gdb /path/to/bacula-sd and then use gdb's attach command).  Then
> use the commands in btraceback.gdb when it crashes.
> 
> __Martin

Thanks Martin.

I've compiled and installed version 3.1.6 from a git pull I did on 10th Dec.
I'm not sure if this new version will crash or not.
But I've manually attached a gdb session to it just in case it does.

Thanks.

--
Jim Barber
DDI Health



[Bacula-users] Crashing storage director. Need help getting trace.

2009-12-06 Thread Jim Barber
Hi all.

I have a problem where every weekend (or more frequently) my storage daemon 
crashes.
The crash is random, but is happening either while running VirtualFull jobs or 
Copy jobs.
So far it hasn't crashed during regular incremental backups.

I am running version 3.0.3 of the Bacula software.

First of all I tried adding a '-d 200' to the arguments that start bacula-sd.
This produced a lot of messages, but nothing unusual that I can see prior to 
the crash.
The last few lines in this log look like so:

vc-sd: mac.c:241-468 before write JobId=468 FI=363302 SessId=1 Strm=MD5 len=16
vc-sd: mac.c:241-468 before write JobId=468 FI=363303 SessId=1 Strm=UATTR len=104
vc-sd: mac.c:241-468 before write JobId=468 FI=363304 SessId=1 Strm=UATTR len=122
vc-sd: mac.c:241-468 before write JobId=468 FI=363305 SessId=1 Strm=UATTR len=77
vc-sd: mac.c:241-468 before write JobId=468 FI=363305 SessId=1 Strm=DATA len=4496
vc-sd: mac.c:241-468 before write JobId=468 FI=363305 SessId=1 Strm=MD5 len=16

So next I have been trying to get the btraceback program running.

I am using Debian packages (self built based on the 3.0.2 Debian sources).
These run the storage daemon under the bacula:tape user:group.
So I modified the btraceback program to use sudo to run gdb.
I also configured sudo to allow the bacula user to do so without being prompted 
for a password.
I then modified the Debian sources so that packages with debugging symbols are 
produced.
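
The sudo configuration might look something like this (the exact rule and gdb path are my guesses at what was done, not taken from the original message):

  # /etc/sudoers.d/bacula
  # Let the bacula user run gdb as root without a password prompt.
  bacula ALL = (root) NOPASSWD: /usr/bin/gdb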

If I become the bacula user and run a test like so:

/usr/sbin/btraceback /usr/sbin/bacula-sd $PID

where $PID is the process ID of the bacula-sd process,
then I get an email showing debugging information.
So as far as I can tell the btraceback program should be working.

I had another crash of the storage daemon after making the changes and no email 
was sent.
Nor was a bacula-sd.9103.traceback file produced.
So I can't send any useful information to try and track down why the storage 
daemon is so unstable.

It was also unstable when using the 3.0.2 Debian package, so I don't think it
is my rebuild that is causing the issue.
Although I feel 3.0.3 is more stable than 3.0.2 was, I still can't get a
complete week's cycle working without a crash.

The /etc/init.d/bacula-sd script defines the PATH to be
PATH=/sbin:/bin:/usr/sbin:/usr/bin
So /usr/sbin is in the PATH, and I'd imagine the program should be able to
find the traceback program.

Any ideas how I can get some useful information from the crash?

--
Jim Barber
DDI Health



Re: [Bacula-users] bit to bit

2009-11-18 Thread Jim Barber
Gilberto Nunes wrote:
> Hi buddies
> 
> I have a question.
> 
> A new client asked me if Bacula does bit-to-bit backup or not...
> 
> I don't know exactly what this means, but either way: does Bacula do this or not?
> 
> Thanks

Hi.

Perhaps they were asking if Bacula can back up at the device level?
The documentation for the FileSet Resource states that if you explicitly
specify a block device such as /dev/hda1 in a FileSet, then Bacula as of
version 1.28 will assume it is a raw partition to be backed up.
It also recommends that if you use this, you specify sparse=yes as well,
to store only the data in the partition rather than the whole partition
byte for byte.

Example from the manual:

 Include {
   Options { signature=MD5; sparse=yes }
   File = /dev/hd6
 }

Regards,

--
Jim Barber
DDI Health



Re: [Bacula-users] PATCH: Allow Copy Jobs to work within one tape library.

2009-11-18 Thread Jim Barber
Arno Lehmann wrote:

 >> Below is a patch to migrate.c to do the same thing as vbackup.c does.
 >> Is this a feasible patch?

 > Looks like it - if you tested it, and it works correctly. Don't forget
 > to test with only one drive available, in that case the job should
 > fail with a reasonable error message... and it might be better to
 > patch against the most current git version.

Thanks Arno.

At the moment I am using Debian packages.
If I can get the Debian patches to apply okay to the git version of Bacula, I
can give it a try.
Failing that, I might have to stick with my 3.0.3, which is a download
from the Bacula site with the Debian patches applied.

Assuming I can't get the git version to compile as a Debian package, I could 
test my version with various scenarios.
Then download the git source and look at migrate.c, make my changes and 
generate the patch from that.
If it works in 3.0.3 then I guess it is likely to work in the git version as 
well...

I'll test various scenarios over the next week or two as I get time and if all 
works for me I'll re-submit to bacula-devel with my findings.

Thanks.

--
Jim Barber
DDI Health



Re: [Bacula-users] Feature Request: Allow schedule to override Next Pool.

2009-11-18 Thread Jim Barber
Thanks Arno.

Is this better?
If so I'll clean out all other text except the feature request and submit it
to bacula-devel.

--

Item ?: Allow Schedule Resource to override 'Next Pool'

Date:   18 November 2009

Origin: Jim Barber. jim.bar...@ddihealth.com

Status: New request

What:   Allow the Schedule resource to define a NextPool= statement
to override the NextPool statement of the pool defined in the job.

Why:    I have an incremental pool that each week gets consolidated into a
full pool via a VirtualFull job. The 'Next Pool' directive of the
incremental pool defines the location of the full pool.

The following week, the next VirtualFull backup will run. It will
read the previous full backups and incremental backups since then,
to create new full backups. It is important that the VirtualFull
backup does not try to write to the same tape that the previous
week's full backup wrote to and left in Append status. Otherwise you
could end up with the one tape needing to be both read and written,
which is a dead-lock.

At the moment I have a hack to get around this. An Admin job calls
an external command that runs a SQL update to find any tapes in the
full pool with an APPEND status and change it to USED. This runs
after the full backups have been done.

Instead I'd like to create two full pools. One for even weeks and
one for odd weeks of the year. That way, even week virtual full
backups could consolidate odd week virtual full backups with the
latest incremental backups. And the odd week virtual full backups
could consolidate the even week full backups with the latest
incremental backups.

The trouble is that the Incremental pool can only define one Next
Pool. I can't have it toggle the Next Pool directive from odd to
even, week to week. Unless I could override it from the schedule.

Doing that would mean I could ditch my SQL hack to manipulate the
tape status. It will also be less wasteful of tapes, since I won't
have partially filled USED tapes throughout my library.

There are possibly many uses for such an override that I haven't
thought about.

Regards,

--
Jim Barber
DDI Health



[Bacula-users] Feature Request: Allow schedule to override Next Pool.

2009-11-17 Thread Jim Barber
Hi.

When defining backup strategies, I've wanted to be able to define the 'Next 
Pool' in the Schedule to override the value defined against a pool.

An example of one usage of such a feature follows:

I have an incremental pool that each week gets consolidated into a full pool 
via a VirtualFull job.
The 'Next Pool' directive of the incremental pool defines the location of the 
full pool.

The following week, the next VirtualFull backup will run.
It will read the previous full backups and incremental backups since then, to 
create new full backups.
It is important that the VirtualFull backup does not try to write to the same
tape that the previous week's full backup wrote to and left in Append status.
Otherwise you could end up with the one tape needing to be both read and
written, which is a dead-lock.

At the moment I have a hack to get around this.
An admin job calls an external command that runs a SQL update to find any tapes 
in the full pool with an APPEND status and change it to USED.
This runs after the full backups have been done.
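
For reference, that hack amounts to a single SQL update along these lines (the pool name follows the examples elsewhere in this archive, and the catalog path is the Debian SQLite3 default, so both are assumptions):

  sqlite3 /var/lib/bacula/bacula.db "UPDATE Media SET VolStatus = 'Used' WHERE VolStatus = 'Append' AND PoolId = (SELECT PoolId FROM Pool WHERE Name = 'FullPool');"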

Instead I'd like to create two full pools.
One for even weeks and one for odd weeks of the year.
That way, even week virtual full backups could consolidate odd week virtual 
full backups with the latest incremental backups.
And the odd week virtual full backups could consolidate the even week full 
backups with the latest incremental backups.

The trouble is that the Incremental pool can only define one Next Pool.
I can't have it toggle the Next Pool directive from odd to even, week to week.
Unless I could override it from the schedule.

Doing that would mean I could ditch my SQL hack to manipulate the tape status.
It will also be less wasteful of tapes, since I won't have partially filled 
USED tapes throughout my library.

Regards,

--
Jim Barber
DDI Health



[Bacula-users] PATCH: Allow Copy Jobs to work within one tape library.

2009-11-17 Thread Jim Barber
Hi.

A while ago I tried to set up a backup strategy where I defined three pools.
An incremental pool; a full backup pool; and a copy pool.

The idea was to run incremental backups forever (except for the first one,
which would be promoted to a full).
Then at the end of each week, consolidate the incremental backups into a full
backup using a VirtualFull job.
Then take a copy of the full backup for off-site storage.

When using a tape library, I could achieve incremental and virtual full backups 
okay.
But I could not run the Copy job because it refused to run, complaining that 
the read storage is the same as the write storage.

I looked at the code for migrate.c and compared it to vbackup.c, since both
have similar concepts.
I wanted to see why the virtual backup works and the copy won't.
I found identical code in both, except that in vbackup.c the particular check
that fails in migrate.c has been wrapped in an #ifdef to remove it.
Also a FIXME comment is there saying that instead it should just verify that
the pools are different.

Below is a patch to migrate.c to do the same thing as vbackup.c does.
Is this a feasible patch?
Would there be any chance of this working its way into the official Bacula 
source? Or will it cause problems?

--- bacula-3.0.3.orig/src/dird/migrate.c
+++ bacula-3.0.3/src/dird/migrate.c
@@ -350,11 +350,14 @@
    Dmsg2(dbglevel, "Read store=%s, write store=%s\n",
       ((STORE *)jcr->rstorage->first())->name(),
       ((STORE *)jcr->wstorage->first())->name());
+   /* ***FIXME***  we really should simply verify that the pools are different */
+#ifdef xxx
    if (((STORE *)jcr->rstorage->first())->name() == ((STORE *)jcr->wstorage->first())->name()) {
       Jmsg(jcr, M_FATAL, 0, _("Read storage \"%s\" same as write storage.\n"),
            ((STORE *)jcr->rstorage->first())->name());
       return false;
    }
+#endif
    if (!start_storage_daemon_job(jcr, jcr->rstorage, jcr->wstorage, /*send_bsr*/true)) {
       return false;
    }

At the moment I have a really badly hacked-up configuration that tries to
achieve what I want by using each drive in the library independently.
It is complicated and messy, with lots of workarounds for various scenarios.
If the above patch is okay then things become much simpler.

Regards,

--
Jim Barber
DDI Health



Re: [Bacula-users] Copy jobs failure: Read storage same as write.

2009-11-08 Thread Jim Barber
I was wondering...
Could I achieve what I want if I were to change the pools to refer to the 
individual drives in the auto-changer tape library instead?

eg.

 # Default pool definition used by incremental backups.
 # We wish to be able to restore files for any day for at least 2 weeks, so set the retention to 13 days.
 Pool {
   Name = Default
   Volume Retention = 13 days
   Pool Type = Backup
   # Automatically prune and recycle volumes.
   AutoPrune = yes
   Recycle = yes
   # Do not use tapes whose labels start with CLN since they are cleaning tapes.
   Cleaning Prefix = "CLN"
   Storage = Drive-0
   # Get tapes from the scratch pool and return them to the scratch pool when they are purged.
   Scratch Pool = Scratch
   Recycle Pool = Scratch
   # The location where the VirtualFull backups will be written to.
   Next Pool = FullPool
 }

 # Pool used by Full and VirtualFull backups.
 # We only need at least the last 2 weeks, so set the retention to 13 days.
 Pool {
   Name = FullPool
   Volume Retention = 13 days
   Pool Type = Backup
   # Automatically prune and recycle volumes.
   AutoPrune = yes
   Recycle = yes
   # Do not use tapes whose labels start with CLN since they are cleaning tapes.
   Cleaning Prefix = "CLN"
   Storage = Drive-1
   # Get tapes from the scratch pool and return them to the scratch pool when they are purged.
   Scratch Pool = Scratch
   Recycle Pool = Scratch
   # The location where the copies go for offsite backups.
   Next Pool = CopyPool
 }

 # Pool used by Copy jobs for offsite tapes.
 # These only need to be valid for a week before being eligible to be overwritten.
 Pool {
   Name = CopyPool
   Volume Retention = 6 days
   Pool Type = Backup
   # Automatically prune and recycle volumes.
   AutoPrune = yes
   Recycle = yes
   # Do not use tapes whose labels start with CLN since they are cleaning tapes.
   Cleaning Prefix = "CLN"
   Storage = Drive-0
   # Get tapes from the scratch pool and return them to the scratch pool when they are purged.
   Scratch Pool = Scratch
   Recycle Pool = Scratch
 }

Note that the "Storage = TL2000" has been changed to refer to "Drive-0" or 
"Drive-1" respectively.
This alternates the drives so that "Pool" and "Next Pool" never refer to the 
same tape drive.

If I were to do the above, would my storage daemon configuration need to change?
i.e. Would I need to specify the "Changer Command" and "Changer Device" in the
"Device" declarations instead of (or as well as) in the "Autochanger"
declaration?
The changer and device entries in my current bacula-sd.conf file are as follows:

 # An autochanger device with two drives
 #
 Autochanger {
   Name = TL2000
   Device = Drive-0
   Device = Drive-1
   Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
   Changer Device = /dev/sg5
 }

 Device {
   Name = Drive-0
   Drive Index = 0
   Media Type = LTO
   Archive Device = /dev/nst0
   RemovableMedia = yes;
   RandomAccess = no;
   Maximum File Size = 4GB
   AutoChanger = yes
   Maximum Job Spool Size = 20g
   Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
 }

 Device {
   Name = Drive-1
   Drive Index = 1
   Media Type = LTO
   Archive Device = /dev/nst1
   RemovableMedia = yes;
   RandomAccess = no;
   Maximum File Size = 4GB
   AutoChanger = yes
   Maximum Job Spool Size = 20g
   Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
 }

Finally, if I were to change the Storage definitions in my pools, would I need
to do any sort of updates to the volumes to reflect the change?
From what I can tell, drive information isn't stored against the media in the
database, only the pool, so I think I might be safe, but just want to double
check.

Thanks,

--
Jim Barber
DDI Health



[Bacula-users] Copy jobs failure: Read storage same as write. Was: VirtualFull backup. No previous Jobs found.

2009-11-08 Thread Jim Barber
# Daemon messages (no job).
Messages {
   Name = Daemon
   mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
   mail = r...@localhost = all, !skipped
   console = all, !skipped, !saved
   append = "/var/lib/bacula/log" = all, !skipped
}

#===
# Pools

# Default pool definition used by incremental backups.
# We wish to be able to restore files for any day for at least 2 weeks, so set the retention to 13 days.
Pool {
   Name = Default
   Volume Retention = 13 days
   Pool Type = Backup
   # Automatically prune and recycle volumes.
   AutoPrune = yes
   Recycle = yes
   # Do not use tapes whose labels start with CLN since they are cleaning tapes.
   Cleaning Prefix = "CLN"
   # There are only 22 usable tapes in the library after adding a cleaning tape.
   # Commented out for now since I'm not sure if I need to restrict this or not.
#  Maximum Volumes = 22
   Storage = TL2000
   # Get tapes from the scratch pool and return them to the scratch pool when they are purged.
   Scratch Pool = Scratch
   Recycle Pool = Scratch
   # The location where the VirtualFull backups will be written to.
   Next Pool = FullPool
}

# Pool used by Full and VirtualFull backups.
# We only need at least the last 2 weeks, so set the retention to 13 days.
Pool {
   Name = FullPool
   Volume Retention = 13 days
   Pool Type = Backup
   # Automatically prune and recycle volumes.
   AutoPrune = yes
   Recycle = yes
   # Do not use tapes whose labels start with CLN since they are cleaning tapes.
   Cleaning Prefix = "CLN"
   # There are only 22 usable tapes in the library after adding a cleaning tape.
   # Commented out for now since I'm not sure if I need to restrict this or not.
#  Maximum Volumes = 22
   Storage = TL2000
   # Get tapes from the scratch pool and return them to the scratch pool when they are purged.
   Scratch Pool = Scratch
   Recycle Pool = Scratch
   # The location where the copies go for offsite backups.
   Next Pool = CopyPool
}

# Pool used by Copy jobs for offsite tapes.
# These only need to be valid for a week before being eligible to be overwritten.
Pool {
   Name = CopyPool
   Volume Retention = 6 days
   Pool Type = Backup
   # Automatically prune and recycle volumes.
   AutoPrune = yes
   Recycle = yes
   # Do not use tapes whose labels start with CLN since they are cleaning tapes.
   Cleaning Prefix = "CLN"
   # There are only 22 usable tapes in the library after adding a cleaning tape.
   # Commented out for now since I'm not sure if I need to restrict this or not.
#  Maximum Volumes = 22
   Storage = TL2000
   # Get tapes from the scratch pool and return them to the scratch pool when they are purged.
   Scratch Pool = Scratch
   Recycle Pool = Scratch
}

# Scratch pool definition
Pool {
   Name = Scratch
   Pool Type = Backup
   Recycle Pool = Scratch
}

#===
# Restricted console used by tray-monitor to get the status of the director

Console{
   Name = vc-mon
   Password = ""
   CommandACL = status, .status
}


Thanks,

--
Jim Barber
DDI Health



Re: [Bacula-users] VirtualFull backup. No previous Jobs found.

2009-11-02 Thread Jim Barber
I think I just stumbled on to something.
I was under the (false) assumption that the VirtualFull would just look at 
previous backup jobs and consolidate them.
But instead of using 'FileSet = "None"', I tried 'FileSet = "LinuxSet"' and it 
started working.

So I guess that means I need to create a VirtualFull job for each separate 
client and their appropriate FileSet as well then?

Will this also be the case for the Copy jobs for the Offsite tapes? (I guess 
so...)

Finally, I suspect that I may need to move the VirtualFull jobs from the 
FullPool back into the Default Pool.
So that the next set of incremental backups will work from them.
Is that the case?
Or will Bacula be aware that there are full backups in a separate pool and 
reference them accordingly?

If I do have to move them, a better solution may be to have the Default Pool 
have "Next Pool" pointing to itself.
But then, would I be able to override the "Next Pool" directive somehow for the 
Copy job?
Since I'd want off-site tapes to be written to their own tapes separate from 
the other on-site backups.

Regards,

--
Jim Barber
DDI Health


Jim Barber wrote:

Hi.

I have just started using Bacula for the first time.
I am using version 3.0.2 as packaged for Debian testing.

  vc-dir Version: 3.0.2 (18 July 2009) i486-pc-linux-gnu debian squeeze/sid

The director and storage daemon are installed on to the same host with a Dell 
TL2000 tape library attached.
This tape library has one autochanger servicing two drives.

I am backing up a mix of Linux and Windows clients, that after doing a full 
backup of each, fills a little over 5 LTO-3 tapes.
I only have about 50 GB of free disk space, so I can't use a backup to disk and 
then to tape strategy, so I am using disk as spool.
I've assigned 20 GB spool space to each drive in the tape library.

The plan I am hoping to achieve is as follows:

1. Incremental backups of all clients nightly, Monday to Friday.
2. Produce a VirtualFull backup after the incremental backup on Friday to 
consolidate the week's incremental backups into a full backup.
3. Copy the newly created full backup to another set of tapes that can be taken 
off-site.

My reasoning is that the clients will only ever need to do one full backup
(which takes over a day for all of them to complete).
From then on, they will only need to do incremental backups, which regularly
get consolidated on the backup server into full backups.
This should keep backup times short at all times and minimise the amount of
data crossing the network.

I have separate pools defined for each of these steps all referring the to same 
tape library.
I also have a scratch pool where I've placed all of my tapes so that the above 
pools can pick from it as required.

So far step 1 is working out fine, but I'm having issues with step 2.
I haven't tried step 3 yet since it depends on a working step 2.

I've run incremental jobs a number of times and now want to create the 
VirtualFull backup.
If I start the VirtualFull job manually it shows:

  *run
  Automatically selected Catalog: MyCatalog
  Using Catalog "MyCatalog"
  A job name must be specified.
  The defined Job resources are:
   1: BackupBuildatron
   2: BackupDavros
   3: BackupDc1
   4: BackupDc2
   5: BackupFreddy
   6: BackupMail
   7: BackupSpirateam
   8: BackupShadow
   9: BackupVc
  10: BackupWiki
  11: BackupWikiHcn
  12: FullBackup
  13: CatalogBackup
  14: OffsiteBackup
  15: RestoreFiles
  Select Job resource (1-15): 12
  Run Backup job
  JobName:  FullBackup
  Level:VirtualFull
  Client:   vc-fd
  FileSet:  None
  Pool: Default (From Job resource)
  Storage:  TL2000 (From Storage from Pool's NextPool resource)
  When: 2009-11-03 09:40:01
  Priority: 11
  OK to run? (yes/mod/no): yes
  Job queued. JobId=67

The job immediately terminates and the messages are as follows:

  03-Nov 09:40 vc-dir JobId 67: Start Virtual Backup JobId 67, 
Job=FullBackup.2009-11-03_09.40.04_02
  03-Nov 09:40 vc-dir JobId 67: Fatal error: No previous Jobs found.
  03-Nov 09:40 vc-dir JobId 67: Error: Bacula vc-dir 3.0.2 (18Jul09): 
01-Jan-1970 08:00:00
Build OS:   i486-pc-linux-gnu debian squeeze/sid
JobId:  67
Job:FullBackup.2009-11-03_09.40.04_02
Backup Level:   Virtual Full
Client: "vc-fd" 3.0.2 (18Jul09) 
i486-pc-linux-gnu,debian,squeeze/sid
FileSet:"None" 2009-11-02 20:50:14
Pool:   "FullPool" (From Job Pool's NextPool resource)
Catalog:"MyCatalog" (F

[Bacula-users] VirtualFull backup. No previous Jobs found.

2009-11-02 Thread Jim Barber
 Storage {
   Name = TL2000
   Address = vc.ddihealth.com
   Password = ""
   Device = TL2000
   Media Type = LTO
   Autochanger = yes
   # Allow two jobs to this tape library so that we can utilise both drives.
   Maximum Concurrent Jobs = 2
 }

 FileSet{
   Name = "LinuxSet"
   Include {
  @/etc/bacula/fileset-linux-exclude.conf
  File = /
   }
   Exclude {
 File = /dev
 File = /lib/init/rw
 File = /proc
 File = /sys
 File = /var/lib/bacula
 File = /.journal
 File = /.fsck
   }
 }

 ### Windows FileSets removed.

 # Fake FileSet definition for jobs that don't use the FileSet field, but still need it declared.
 FileSet{
   Name = "None"
   Include {
 Options {
   signature = MD5
 }
   }
 }

 ### Catalog and Messages definitions removed...

 # Incremental backups go here.
 Pool{
   Name = Default
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 13 days
   Cleaning Prefix = "CLN"
   Maximum Volumes = 22
   Storage = TL2000
   Scratch Pool = Scratch
   Recycle Pool = Scratch
   # The location where the virtual full backups will go.
   Next Pool = FullPool
 }

 # VirtualFull and Catalog backups go here.
 Pool{
   Name = FullPool
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 35 days
   Cleaning Prefix = "CLN"
   Maximum Volumes = 22
   Storage = TL2000
   Scratch Pool = Scratch
   Recycle Pool = Scratch
   # The location where the copies go for offsite backups.
   Next Pool = CopyPool
 }

 # Offsite tapes get written here.
 Pool{
   Name = CopyPool
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 6 days
   Cleaning Prefix = "CLN"
   Maximum Volumes = 22
   Storage = TL2000
   Scratch Pool = Scratch
   Recycle Pool = Scratch
 }

 Pool{
   Name = Scratch
   Pool Type = Backup
   Recycle Pool = Scratch
 }

 ### Console directive removed...

Can anyone see why the VirtualFull backup can't find the previous backup jobs?

The "FullBackup" job is referring to the "Default" pool where the incremental 
backups have been going.
The first ever incremental backups to the "Default" pool created Full backups 
as expected.
The "Default" pool has "Next Pool" defined to be "FullPool".
I figured it should be able to find all the backups on the "Default" pool.

I've also tried just using a single "Default" pool with "Next Pool" pointing to
itself.
But I think that may conflict with the Copy job I want to achieve later?
However, I had exactly the same result with the VirtualFull backup when trying
this method as well.

Thanks,

--
Jim Barber
DDI Health
