[Bacula-users] Cleaning LOG.MYD

2012-01-09 Thread Eric Pratt
I have a 17GB LOG.MYD file in the bacula database on my MySQL server.  This
is because a previous configuration of our Bacula instance used MySQL as a log
destination.  We no longer want that and have set our logs to go to text
files that can be rotated.

As I understand it, Bacula will purge these logs when their associated job
entries are purged.  We are using Bacula to perform archives as well as
backups so we are keeping data around for a very long time.  I would like
to continue to keep the data in volumes with the ability to restore it but
disregard log entries about it.

So I'm left with a 17GB table.  Of course, our MySQL backups are backing
this table up with regularity and I'd like to stop that and free up some
disk space in the process.  This file has not been modified in a month so
I'm confident nothing else is writing to it.  Is it safe to clean this
table out?  If so, what is the safest or easiest way to clean that table
out? (bonus points if safest and easiest are the same!)  What are the
repercussions of wantonly wiping out all entries in this table?
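
For concreteness, the operation I have in mind (assuming the 17GB file
belongs to the catalog's Log table, and with the Director stopped so
nothing writes to the catalog while it runs) would be something like:

  # hypothetical sketch: TRUNCATE recreates the MyISAM table,
  # releasing the disk space held by LOG.MYD
  mysql bacula -e 'TRUNCATE TABLE Log;'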

Eric


Re: [Bacula-users] Noob user impressions and why I chose not to use Bacula

2011-12-05 Thread Eric Pratt
On Mon, Dec 5, 2011 at 11:36 AM, Pablo Marques pmarq...@miamilinux.net wrote:


  Thank you, Jesse, for the feedback.
 
  Regarding the disaster recovery, I have a suggestion for the bacula
  team:
 
  Why not make the director write the bacula config files and any
  relevant bsr files at the beginning of each tape? The space wasted on
  the tape to save these files would be very small.

 Well, the first problem here is that the Director would have to know how
 much space it was going to need for BSR files.  Of course, it could
 pre-allocate a fixed-size block of, say, 1MB for BSR files.

 Agreed, 1 MB is basically nothing on a tape and it can easily accommodate
 a huge number of bsr files.
 My /etc/bacula is 88k uncompressed.

 The second problem, it seems to me, is that this would break
 compatibility with all older Bacula volumes and installations.

 Not necessarily, if you make this information at the beginning of the tape
 look like a volume file.
 It will be ignored by old directors because it will look the same as a
 failed job that took space on the tape.


 Pablo


Or you could just decide that backward compatibility of that type is not
that important.  For instance: versions x.0.0 and later use this
format.  Tapes written in this format are not accessible to older
versions.  It's OK to do that.  It's also OK to have the option of writing
in the older format from the newer directors.  This will give you time to
bring all your directors up to date before switching to the new format.


Re: [Bacula-users] Noob user impressions and why I chose not to use Bacula

2011-12-05 Thread Eric Pratt
On Mon, Dec 5, 2011 at 4:47 PM, Jose Ildefonso Camargo Tolosa 
ildefonso.cama...@gmail.com wrote:

 On Mon, Dec 5, 2011 at 6:55 PM, James Harper
 james.har...@bendigoit.com.au wrote:
  Regarding the disaster recovery, I have a suggestion for the bacula
  team:
 
  Why not make the director write the bacula config files and any
  relevant bsr
  files at the beginning of each tape?
  The space wasted on the tape to save these file would be very small.
 
 
  A script to email the bsr file to a gmail/Hotmail/whatever account would
  suffice. It's not like the file contains any sensitive information.

 Well, yeah, you could also rsync those files to any of the available
 online storage solutions (some of them free), but I think this
 defeats the point of being able to recover from just the tape, or for
 instance: any storage (disks, for example)...

 Now, I wonder if any backup solution out there allows you to do
 this... ie: to recover from just the backup media, you *always* need to
 get the OS running again... so, there is no such thing as bare metal
 recovery. Unless you create something like an installer image that
 uses the backup to restore the machine... mmm maybe a
 live-bacula... that automatically rebuilds the catalog from
 volumes... uh... is that possible? (automatically and completely
 rebuild the catalog from volumes?)

 Ildefonso Camargo


How about a special DR backup that dumps the database to the DR volume
along with the server's config and each defined client's config?  The
volume would be a simple tarball you could store on a thumbdrive, tape,
cloud, whatever.  When you do your bare metal restore of your systems, you
only need to retrieve that tarball.  With it you should be able to
completely recreate your catalog and clients from this sucker with a
drbacula tool.  As long as I don't have to cough up for it, that should be
a pretty good basis for bare metal restores of servers and clients from a
live distro.
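
A rough sketch of what that DR dump could look like (the paths and
database name here are illustrations, not an existing tool):

  #!/bin/sh
  # hypothetical DR dump: catalog plus configs in one tarball
  # assumes a MySQL catalog named "bacula" and configs under /etc/bacula
  mysqldump bacula > /var/tmp/bacula-catalog.sql
  tar czf /var/tmp/bacula-dr.tar.gz /var/tmp/bacula-catalog.sql /etc/bacula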

Eric


[Bacula-users] Need help canceling a job

2011-10-19 Thread Eric Pratt
I tried restoring a file but the restore wanted to mount an offsite
tape.  I took another path to get the file I needed, so I no longer need
this restore done.  However, it's still waiting on a mount so I cannot
cancel it.  Here is what happens.

*status all
--snip--
JobId 25604 Job RestoreFiles.2011-10-19_11.53.27_04 is running.
--snip--
*cancel jobid=25604
JobId 25604 is not running. Use Job name to cancel inactive jobs.
*cancel job=RestoreFiles
Warning Job RestoreFiles is not running. Continuing anyway ...
JobId 0, Job RestoreFiles marked to be canceled.
 *status all
--snip--
JobId 25604 Job RestoreFiles.2011-10-19_11.53.27_04 is running.
--snip--

I have another restore queued up that I do need, and this one is keeping
the one I need from proceeding.  How can I cancel this restore job?

Thanks!

Eric



Re: [Bacula-users] Need help canceling a job

2011-10-19 Thread Eric Pratt
On Wed, Oct 19, 2011 at 2:17 PM, Ben Walton bwal...@artsci.utoronto.ca wrote:

 I'm not positive, but I believe that to reference this job by name,
 you'd want: RestoreFiles.2011-10-19_11.53.27_04

 Not just RestoreFiles.

 Also, if you just say cancel, doesn't it prompt you with a menu driven
 choice?

 Thanks
 -Ben
 --
 Ben Walton
 Systems Programmer - CHASS
 University of Toronto
 C:416.407.5610 | W:416.978.4302

Thanks for your reply!

I've also tried using the full name as you mentioned; I get the exact
same response and the job is not canceled.  If I run cancel
without options, I get this:

*cancel
No Jobs running.

Yet, I can still see the job is listed as running in the status output.

I just restarted bacula-sd and that did cancel the job.  But that just
seems like hitting a small nail with a 20lb sledge hammer.  It also
seems like the wrong solution given that it can affect other processes
too.

I've been looking through the documentation, and it seems to say that
I need to perform a mount of the volume before the job will cancel.
That can't be right!  The volume is offsite and not available right
now which is why it is waiting on the mount in the first place.  While
I can go retrieve the offsite volume and perform the mount, that's a
lot of legwork since I'm not on the same side of the city where my
datacenter lives.

Do I really have to mount the volume to be able to cancel the job?  If
so, I cannot make sense of that.  Is there another way?

Eric



Re: [Bacula-users] Bacula, Offsite, and Restores

2011-09-17 Thread Eric Pratt
2011/9/17 Rodrigo Renie Braga rodrigore...@gmail.com:

 Well, in order to restore from the Copy tapes, yes, you have to purge
 the original tapes...

 But you don't have to purge the original tapes if you want to do a normal
 routine restore. Even if you pull out the Copy tapes from your library
 right after a Copy Job (which is the whole idea), your restores will work
 normally as if a Copy Job had never run before...

 The original tapes will stay intact at your on-site location and you'll
 use these original tapes to do your restores. You will only use your copy
 job tapes in case of a disaster...


Yes, but what I am looking for is the easiest method for my team to
perform restores in case I'm not there, even if it is a restoration
from offsite volumes.  Having them go through the process of finding
the right volumes to purge and then purging them is something I don't
think they should have to learn how to do.  It's nice that the ability
is there in Bacula, but I think it's far too complicated for a robust
solution.  The solution needs to be easy and intuitive and merely
running a second client makes it ridiculously easy and intuitive.  I
think it's the right way to go for now.

Eric



Re: [Bacula-users] How to use retention to limit disk usage

2011-09-17 Thread Eric Pratt
On Fri, Sep 16, 2011 at 7:20 PM, Eric Sisolak haldir.j...@gmail.com wrote:
 Hello,

 I am looking for a way to limit the amount of space taken up by
 backups by truncating/deleting volumes whose files and jobs have been
 pruned according to retention periods with Bacula 5.0.3.

 ActionOnPurge=Truncate looks like it could do what I want, but it
 seemed like many people had issues with it.

 Has anyone else implemented this? How did you do it?

 --Eric

I am doing something like this, but not using ActionOnPurge.

I'm using vchanger as a VTL and initializing X number of volumes.
Let's say X = 80.  I tell Bacula to limit the volume size to 5GB,
meaning I'm now using at most 400GB of disk space for backups.  I also
tell Bacula to use a volume retention period of one day and to recycle
the oldest volume.  Bacula will not use these settings unless it runs
out of volumes so your data will be retained in older volumes until
you run out of space in the pool.  But, once you've filled up your
80th volume in this scenario, it will look at the volume with the
oldest last-written timestamp in the pool and see if the volume
is past its retention period.  If it is, it will purge the jobs and
files and re-use the volume.  The net result is that Bacula is now set
to use 400GB of storage space and never exceed it.  It will
automatically cannibalize the oldest volumes and purge records
associated with those volumes as needed.

If you are a little queasy about a 1-day volume retention, you can set
this to something higher, like one month, to ensure you always have at
least one month's worth of backups.  Just be aware that any jobs
attempting to use storage when all 400GB are allocated will hang
waiting for volumes if there are no volumes past their volume
retention period.  You must make sure that you have enough storage to
handle the actual retention period you want.

This also makes the names of the volumes generic, so your volume names
will no longer be indicative of their contents.  I find volume names
tied to their jobs and pools or whatnot to be useless for
me.  So using generic names for the volumes results in no loss, but I
gain the ease of use of vchanger.

Here are the relevant config sections I'm using to accomplish this:

--
bacula-sd.conf
--

Device {
  Name = PrimaryVTLDevice
  DriveIndex = 0
  Autochanger = yes
  Media Type = File
  Device Type = File
  Archive Device = /var/lib/bacula/PrimaryVTL/0/drive0
  Random Access = yes
  RemovableMedia = yes
  LabelMedia = yes
}

Autochanger {
  Name = PrimaryVTLAutoChanger
  Device = PrimaryVTLDevice
  ChangerDevice = /etc/bacula/PrimaryVTL.conf
  ChangerCommand = "/opt/bacula/bin/vchanger %c %o %S %a %d"
}

---
bacula-dir.conf
---

Storage {
  Name = PrimaryVTLStorage
  Address = enter.your.bacula-sd.hostname.here
  SDPort = 9103
  Password = 
  Device = PrimaryVTLAutoChanger
  Media Type = File
  Autochanger = yes
}

Pool {
  Name = PrimaryVTLPool
  PoolType = Backup
  Storage = PrimaryVTLStorage
  AutoPrune = yes
  VolumeRetention = 1 day
  MaximumVolumes = 80
  MaximumVolumeBytes = 5368709120
  RecycleOldestVolume = yes
}

---
PrimaryVTL.conf
---

changer_name = PrimaryVTL
work_dir = /var/lib/bacula/PrimaryVTL
virtual_drives = 1
slots_per_magazine = 80
magazine_bays = 1
magazine = /var/backups/PrimaryVTL

--

Hope that helps!

Eric



Re: [Bacula-users] Bacula, Offsite, and Restores

2011-09-16 Thread Eric Pratt
Thank you for your feedback, Rodrigo.  I looked up the copy job
information as you suggested.  From what I can tell, you have to purge
the original job before you can use a copy.  This means to me that to
do a restore, we have to:

1) identify all the jobs associated with all the files being restored.
2) purge those jobs from the database (which promotes their copies to
a restorable state)
3) perform the restore

Since I'm trying to make it as easy as possible for my coworkers to
restore, having them go through the process of identifying the jobs
for a given backup and purging them seems a bit much.  I've decided to
stick with the dual client method.  I've implemented it and tested it.
It is working beautifully!

We're running CentOS and the Redhat/CentOS init script for bacula-fd
as supplied will not allow multiple clients.  I had to modify it to
allow that.  In fact, I will look into submitting my updated init
script to the package maintainer.  The init script doesn't use the PID
files generated by bacula to manage the process and it should, even if
you just want to run a single client.  Other than that, the only
drawback is that when we do perform offsite backups, we are
essentially moving that data over the network twice (once for the
normal backups and once for the offsite backups.)  Since we're not
moving a lot of data, this is a non-issue.
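
For anyone curious, the second client is just another bacula-fd instance
started with its own config file.  A minimal sketch (the names, port, and
paths are illustrative, not our exact setup):

  # /etc/bacula/bacula-fd-offsite.conf
  FileDaemon {
    Name = clientname-offsite-fd
    FDport = 9112                  # the primary instance keeps 9102
    WorkingDirectory = /var/lib/bacula-offsite
    Pid Directory = /var/run/bacula-offsite
  }

  Director {
    Name = bacula-dir
    Password = 
  }

  Messages {
    Name = Standard
    director = bacula-dir = all, !skipped
  }

  # started alongside the primary daemon:
  /usr/sbin/bacula-fd -c /etc/bacula/bacula-fd-offsite.conf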

Thanks again!

Eric

On Sun, Sep 11, 2011 at 5:55 PM, Rodrigo Renie Braga
rodrigore...@gmail.com wrote:
 Well, I'm really just starting to figure out this Bacula feature myself,
 but I'd recommend taking a look at Copy Jobs.

 The idea would be to run only your normal Full/Diff/Inc Backups and then,
 weekly, create a copy of them on your offsite storage. When restoring, it
 will require only your normal Full/Diff/Inc backups. Only when these are
 unavailable (like in a disaster!) would Bacula automatically require
 the Offsite Storage.

 I lost all my documentation and links that I had about Copy Jobs (along
 with my 1TB HD), but I'm once again taking a look at this feature to
 implement it at a different company, and as soon as I find that
 documentation again I will send it to you...

 2011/9/8 Eric Pratt eric.pr...@etouchpoint.com

 I'm using Bacula with USB drives to perform offsite backups.  I'm
 trying to create the simplest process possible so in the event I'm
 unavailable, my coworkers can perform restores with confidence without
 knowing a whole lot about bacula.

 Originally, our offsite backups were performed as just a part of the
 normal schedule for a job called JobName.  The schedule was:

  Run = Level=Full 1st sun at 23:05
  Run = Level=Differential 2nd-5th sun at 23:05
  Run = Level=Incremental mon-sat at 23:05
  Run = Level=Full sun at 10:00 Pool=OffsitePool

 Any scheduled runs that did not define a pool went to our normal VTL
 pool.  The problem I had there was that the weekly fulls at 10:00AM
 each sunday became the new baseline for all following differentials
 and incrementals.  The drive with offsite backups is cycled weekly so
 this means attempting to restore a directory that got blown away
 Tuesday meant bringing the offsite drive back before restoring.  The
 intention for our offsite backups is purely to give us some form of
 disaster recovery ability.  In this case, they're actually hindering
 us.  Worse, the more we have to bring these onsite, the less likely
 they will be safe for DR purposes.

 I want to separate the offsite backups from the normal backups so I
 removed the last line of the schedule above.  Then I created a new job
 called JobNameOffsite and a new schedule called
 JobNameOffsiteSchedule that looks like this (the pool is defined in
 the job to be OffsitePool):

  Run = Level=Full sun at 10:00

 Now, the differentials and incrementals appear to be looking at the
 full from the 1st sunday of the month and not the weekly offsite fulls
 for the last full.  However, restoring most files still results in
 attempts to require the offsite backup volumes since it was the last
 job to use that fileset for a specific path on a given client.  Since
 the offsite disk is not available on any given week, this can pose
 problems.  I can of course work around this and pull files from
 specific job IDs but I am trying to keep this simple for non-admins to
 perform restores in my absence.

 My next idea is to configure a second client on each machine just for
 offsite backups.  If I do this, I can tell bacula to restore from
 ClientName or ClientNameOffsite.  This should provide 100% separation
 of the normal and offsite backups as well as an easy method for
 restoring data for those who are not experts with bacula.  They would
 simply choose to restore from ClientName and it's business as usual.
  Restores from ClientNameOffsite would be only in emergency
 situations.  Even then, it's as simple as bringing the disk on site
 and choosing to restore from the proper client name.

 Before I start down this path, does anyone have any

[Bacula-users] Bacula, Offsite, and Restores

2011-09-08 Thread Eric Pratt
I'm using Bacula with USB drives to perform offsite backups.  I'm
trying to create the simplest process possible so in the event I'm
unavailable, my coworkers can perform restores with confidence without
knowing a whole lot about bacula.

Originally, our offsite backups were performed as just a part of the
normal schedule for a job called JobName.  The schedule was:

  Run = Level=Full 1st sun at 23:05
  Run = Level=Differential 2nd-5th sun at 23:05
  Run = Level=Incremental mon-sat at 23:05
  Run = Level=Full sun at 10:00 Pool=OffsitePool

Any scheduled runs that did not define a pool went to our normal VTL
pool.  The problem I had there was that the weekly fulls at 10:00AM
each Sunday became the new baseline for all following differentials
and incrementals.  The drive with offsite backups is cycled weekly, so
restoring a directory that got blown away on Tuesday meant bringing the
offsite drive back before restoring.  The
intention for our offsite backups is purely to give us some form of
disaster recovery ability.  In this case, they're actually hindering
us.  Worse, the more we have to bring these onsite, the less likely
they will be safe for DR purposes.

I want to separate the offsite backups from the normal backups so I
removed the last line of the schedule above.  Then I created a new job
called JobNameOffsite and a new schedule called
JobNameOffsiteSchedule that looks like this (the pool is defined in
the job to be OffsitePool):

  Run = Level=Full sun at 10:00

Now, the differentials and incrementals appear to use the full from the
1st Sunday of the month, not the weekly offsite fulls, as their baseline.
However, restoring most files still tries to pull in the offsite backup
volumes, since the offsite job was the last one to use that fileset for a
specific path on a given client.  Since
the offsite disk is not available on any given week, this can pose
problems.  I can of course work around this and pull files from
specific job IDs but I am trying to keep this simple for non-admins to
perform restores in my absence.

My next idea is to configure a second client on each machine just for
offsite backups.  If I do this, I can tell bacula to restore from
ClientName or ClientNameOffsite.  This should provide 100% separation
of the normal and offsite backups as well as an easy method for
restoring data for those who are not experts with bacula.  They would
simply choose to restore from ClientName and it's business as usual.
Restores from ClientNameOffsite would be only in emergency
situations.  Even then, it's as simple as bringing the disk on site
and choosing to restore from the proper client name.
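
On the Director side, that would just mean a pair of Client resources
along these lines (names, address, and port are placeholders):

  Client {
    Name = ClientName
    Address = clientname.example.com
    FDPort = 9102
    Catalog = MyCatalog
    Password = 
  }

  Client {
    Name = ClientNameOffsite
    Address = clientname.example.com
    FDPort = 9112            # second bacula-fd instance on the same host
    Catalog = MyCatalog
    Password = 
  }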

Before I start down this path, does anyone have any other ideas to
keep restores simple while still achieving offsite backups within
bacula?  Is there some simple way to tag a job as offsite to provide
this level of separation without a second client on each machine?  Are
there any pitfalls with my proposed approach that I should be aware
of?

I am trying to avoid some things in this process.  The primary one is
avoiding doing anything outside of bacula.  This is tied to the
simplicity factor.  I can document the restore procedures, but I want
to avoid having people look up volumes tied to JobIDs and performing an
rsync to move the volumes into place.  I want them to just go into
bconsole, restore, and be done with it.

Thanks, everyone!

Eric



Re: [Bacula-users] My backup schedule overlaps. Can it be fixed?

2011-09-01 Thread Eric Pratt
On Thu, Sep 1, 2011 at 11:57 AM, Dan Schaefer d...@performanceadmin.com wrote:
 In English, I want to do a full backup on the 1st and the 16th of every
 month. I also want to do a differential backup every Sunday and an
 Incremental Monday-Saturday.
 Here is my config:
         Run = Level=Full Pool=Full-Pool Developer-PC on 1,16 at 0:05
         Run = Level=Differential Pool=Diff-Pool Developer-PC sun at 0:15
         Run = Level=Incremental Pool=Inc-Pool Developer-PC mon-sat at
 0:25

 Since the Full backups are scheduled on numbered dates, the Diff and Inc
 backups always overlap with the Full. For example, today Sept 1, a
 full backup was scheduled to run @ 00:05 and an incremental backup was
 scheduled @ 0:25. If in fact there were no changes to the files in the
 fileset, the incremental backup shouldn't have backed up anything. But
 it runs anyway.

 Is there a schedule that I can configure that will eliminate backup
 redundancy? One option is to do a diff on, say, the 8th and 22nd of the
 month and the incremental on every other day not covered. I would,
 however, like to keep the differentials on Sunday.

 Thanks in advance.

 --
 Thanks,
 Dan Schaefer
 Web Developer/Systems Analyst
 Performance Administration Corp.
 ph 800-405-3148


The incremental will run, but it shouldn't back anything up if nothing
changed since the last time the job ran.  When it runs, it looks to
see if anything changed and if not, exits with OK. There is no
redundancy there.  Check the byte count of the job or list the files
backed up by the job to see if it actually moved anything.  If the
byte count is 0 and/or there are no files in it, then it's working
exactly as you expect.
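
In bconsole you can check this with something like (the jobid is a
placeholder):

  *list jobs
  *list files jobid=12345

list jobs shows each job's file and byte counts; list files shows exactly
what a given job backed up.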

However, if your full backup didn't finish before the incremental
started, then the incremental could not use the full backup for
comparison.  It will have used the last completed backup instead.  To
resolve this, make sure that Maximum Concurrent Jobs in the Job
resource to '1'.  This is the default so it is already set to '1'
unless you've defined it in the Job or a referenced JobDefs resource.
Basically, if you're not defining Maximum Concurrent Jobs anywhere,
then this second paragraph is not what's happening.
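
To be explicit, the directive would look like this in bacula-dir.conf
(illustrative; the other required Job directives are omitted):

  Job {
    Name = Developer-PC
    Maximum Concurrent Jobs = 1  # incremental waits for the full to finish
    # Client, FileSet, Schedule, Storage, Pool, etc. as you have them now
  }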

Eric



Re: [Bacula-users] Schedules, Full runs and incremental runs

2011-08-30 Thread Eric Pratt
On Tue, Aug 30, 2011 at 2:02 AM, Christoph Weber we...@mfo.de wrote:
 Hello Bacula-List,

 I've got a little problem with the understanding of schedules.

 I want Bacula to do a full backup once a month on specific dates, which
 is the Monday following the last Friday of a month; during normal
 week-days I want to have an incremental run.

 Schedule {                                 # optional
   Name = vm-backup-incremental_schedule    # required
   Run = Full jan 31 at 1:00
   Run = Full feb 28 at 1:00
   Run = Full mar 28 at 1:00
   Run = Full may 02 at 1:00
   Run = Full may 30 at 1:00
   Run = Full jun 27 at 1:00
   Run = Full aug 01 at 1:00
   Run = Full aug 29 at 1:00
   Run = Full oct 02 at 1:00
   Run = Full oct 30 at 1:00
   Run = Full nov 27 at 1:00
   Run = Full jan 01 at 1:00
   Run = Incremental mon-fri at 2:00
 }

 The problem is that I have a full run and an incremental run scheduled
 on the same day, at 1:00 and 2:00, and the incremental run at 2:00 is not
 an increment of the 1:00 full backup, but an increment of the last
 incremental backup preceding the full backup.

 On the friday before the full tape backup with bacula, I run a VMware
 full backup which produces several big files, so I have a lot of new
 data (~1TB) which is now backed up twice with bacula, once with the full
 backup and once with the incremental backup.

 I could run the full backup one day before on sunday, but I cannot be
 sure that it'll be finished before the incremental backup on monday.

 So how can I ensure that the incremental run is an increment of the
 full backup and not of the last incremental run?

 regards,
 Christoph Weber

Try limiting the jobs that can run simultaneously.  If your
incremental is using the previous incremental as a baseline even
though it is scheduled after the full, it is probably because it is
kicking off before the full finishes and thus cannot use it as a
baseline.  If you limit the number of simultaneous jobs to 1, then the
full will finish before the incremental and will be used as the
baseline.

There are a number of ways to limit the number of jobs running or
writing to a given storage resource.  Use the one that best fits your
environment and you should be good to go.
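
For example, in bacula-dir.conf (illustrative; the directive also exists
in the Job and Client resources and in the storage daemon):

  Storage {
    Name = TapeStorage
    Maximum Concurrent Jobs = 1  # serialize jobs writing to this storage
    # Address, Password, Device, and Media Type as you have them now
  }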

Eric



Re: [Bacula-users] Ghost jobs?

2011-08-24 Thread Eric Pratt
2011/8/23 Carlos André candr...@gmail.com:
 Running Jobs:
 Writing: Incremental Backup job SRV10 JobId=61090 Volume=
    pool=MON device=IBM_TL_LTO3-0 (/dev/IBM_TL_LTO3-0)
    spooling=0 despooling=0 despool_wait=0
    Files=0 Bytes=0 Bytes/sec=0
    FDSocket closed

 ---
 *cancel jobid=61090
 JobId 61090 is not running. Use Job name to cancel inactive jobs. ##
 BUT these mf... keep showing and f.. with my routines :/
 ---

Have you tried doing what bacula says to do there and cancel the job
by name instead of id?  If so, what does it say then?

Eric



Re: [Bacula-users] Ignore fileset changes

2011-08-24 Thread Eric Pratt
On Wed, Aug 24, 2011 at 1:50 AM, Olle Romo oller...@gmail.com wrote:

 On Aug 23, 2011, at 8:43 AM, Adrian Reyer wrote:

 On Tue, Aug 23, 2011 at 01:38:23AM +0200, Olle Romo wrote:
 What I mean is that if I have a drive removed, run the job, then attach
 the drive and run the job again, Bacula will do a full backup of the
 drive even if it previously had done an incremental backup on the same
 drive. Ideally I want it to just continue the incremental backup.

 Use 2 filesets and 2 jobs. One like you have now, but exclude the
 removable drive. The other only has the removable drive and you only
 run it when the drive is there, e.g. checked by a pre-script.

 Regards,
       Adrian

 That will be the way to go. Problem is I have quite a few drives that
 come and go in different combinations. I still wish I could control
 that particular behavior.

 Thanks for the tip :)

 Best,
 Olle

You can.  As Adrian says you should be able to use a
ClientRunBeforeJob directive.  This tells the client to run a shell
script that checks for the presence of the drive.  If the script does
not detect the drive, it exits with code 1 and the job will not run.
If it does detect the drive, it exits with code 0 and the backup job
runs.  I missed the previous portion of
this thread, but I'm assuming this is a Linux client that is mounting
a drive.  If that's the case, then you're still functionally getting
an incremental: the directory was empty last time (when the drive was
not mounted) but now has the entire drive's contents in it, so the
incremental job is dutifully backing up all changed data since the
last incremental, which happens to be all of the data on the drive now
that Bacula can see it.  Merely detecting the drive with a
ClientRunBeforeJob script will solve that problem entirely.

If you have many drives you interchange in this way, I would recommend
making your script take an argument so you only have to maintain a
single script that can check any drive you want as dictated by the
job.
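
A minimal sketch of such a check script and the directive that would call
it (the paths and names are made up for illustration):

  #!/bin/sh
  # check-drive.sh: exit 0 only if the given mountpoint is mounted;
  # a nonzero exit keeps the backup job from running, as described above
  if mountpoint -q "$1"; then
      exit 0
  fi
  exit 1

and in the Job resource:

  ClientRunBeforeJob = "/usr/local/bin/check-drive.sh /mnt/usb-drive-1"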

If you do this I don't see why you would need 2 filesets and 2 jobs.
It should work just fine with one of each unless that is addressing
something I missed from previous emails.



Re: [Bacula-users] FileStorage strategy

2011-08-16 Thread Eric Pratt
On Tue, Aug 16, 2011 at 12:39 AM, Lyn Amery l...@gaiaresources.com.au wrote:

 Hi all,

 I'm just getting started with Bacula and wondering about the best strategy
 for setting up file storage naming.  Is there a best practice, perhaps,
 or do people have suggestions?

 I have about 15 systems to back up to about 8 TB of disk.  I know
 that I could create a single volume - labelled say, BIG - or I
 could go to the other extreme and have a a daily volume for each
 system, e.g. ServerA_Monday, ServerB_Tuesday.  I was thinking of either
 one for each system (ServerA, ServerB, etc) or by using Use Volume Once,
 having something like ServerA001, ServerA002, ServerB001, etc.  My only
 reasoning for this is to try and keep things simple.  Is there any
 advantage to having lots of small files as compared to a few large ones?

 Thanks for any feedback.

 Cheers,
 Lyn

I'm becoming a big fan of using vchanger to provide a virtual tape
library.  I was originally using vchanger to make it easier to manage
storage on USB drives for offsite rotations, but I'm actually going to
start configuring my normal backups to use it too.  Basically, I set a
maximum volume size and maximum number of volumes in the pool.  For
instance, if I have a tape library of 100 volumes at 5GB each, then I
know I will never exceed 500GB of storage.  Of course, you choose
volume sizes appropriate to your needs.  In your case, with 8TB of
backup storage, you might want to consider something like one hundred
and sixty 50GB or eighty 100GB volumes.  There is always overhead in
Bacula for switching from one volume to the other.  If you don't have
a need for a small file size, then you can reduce the total amount of
this overhead by increasing volume size.  Make vchanger aware of the
max volume count and have it initialize the pre-set number of volumes
which later get labeled in bconsole using 'label barcodes'.

All normal backup jobs point to the same pool and I don't have to
worry about volume naming at all since restores will query the
database anyway.  The command line tools to restore without the database
still work on these volumes in a pinch.  These become easier to work
with if all of the data you're looking to restore is in a single
volume.  It's another good reason for large volume sizes in this
configuration.

I set volume retention short but job and file retention long.  This
allows Bacula to automatically recycle the oldest volume when it runs
out of space without purging the jobs or files until the oldest volume
is actually recycled.  If you want to hold onto your data for at least
a month and make Bacula prompt you when it can't do that given the
pool's capacity, you can set your volume retention to 1 month.  That
keeps Bacula from overwriting the oldest data too soon.  This all
amounts to Bacula automatically giving you the maximum retention period
for the disk space that you have while honoring a minimum retention
period as determined by your company's IT policies.
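
For the 8TB example above, the pool definition might look something like
this (values illustrative):

  Pool {
    Name = VTLPool
    Pool Type = Backup
    Maximum Volumes = 80
    Maximum Volume Bytes = 107374182400  # 100GB
    Volume Retention = 1 month
    AutoPrune = yes
    Recycle Oldest Volume = yes
  }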

This makes the system work a lot more like a tape-based backup system
where the tapes have real capacity limits and you don't care about the
names of the tapes.  It really is a decent and cheap software
implementation of a VTL.

Now if I could just get some hardware accelerated compression between
Bacula and these files, I would be in heaven.  Perhaps someone will
write a routine to allow Bacula to offload compression to a GPU.  Just
a wish list item for anyone out there who's looking for something
unnecessary (but very cool) to code up . . .

Eric



Re: [Bacula-users] Need some help

2011-08-15 Thread Eric Pratt
2011/8/15 Ignacio Cardona ignaciocard...@gmail.com:
 Dear all,
     I will be very glad if someone could help me.  I have decided
 after many recommendations to implement the Bacula software in the
 environment of my e-commerce site. The version that I'm trying to test
 is Bacula 5.0.3. I have started reading the docs and all the stuff, but
 I am completely mixed up by the information. I have experience with
 backup tools (TSM, NetBackup, BackupEXEC), but in this case I cannot
 follow the installation guide, so could someone give me a hand with some
 advice, or at least tell me how to begin the installation? (I have
 downloaded the packages and the Bacula 5.0.3.) My problem starts when I
 have to perform the ./configure: I don't know how to configure it. I
 mean, when I cat configure it looks like a script; do I have to edit it,
 or am I supposed to edit another, different file? Could you help me with
 that, or give me some kind of guide more organized than the one that is
 on bacula.com, or at least tell me the order in which I am supposed to
 follow it?

 Thanks in advance!
 Don't hesitate to contact me with further questions!

 Ignacio Cardona.

You should check with your operating system's package manager to see
if it already has a bacula package.  Then you can just install the
packages from your package manager and skip compiling bacula (i.e. no
need to worry about ./configure).

If your operating system does not have a package, you can always run
this command to get help with the configure script:

 ./configure --help

Generally, if you don't think you need any specific compile-time
options enabled, you can just type this:

 ./configure && make && make install

Make sure you are root when you run this.  The configure script will
automatically scan your system for you.  It prepares the software so
it can be compiled for your system.  Unless you get an error that
causes configure to exit, you won't have to do anything more than
what's above to get the software configured, compiled, and installed.
After that, you can start following the reference manual on
configuring your director, storage, and file daemons.

Eric



Re: [Bacula-users] Migrate File-Storage without autochanger to vchanger Skript?

2011-08-08 Thread Eric Pratt
I am in the process of doing exactly this same thing but for a different reason.

If only a single job can access your storage at a time, then you
should look for a Maximum Concurrent Jobs directive in your
configuration files.  You can find this in a number of different
resources.  So just grep for it and track down any resources that
might be limiting the number of jobs writing concurrently to the same
storage/device.  Also be sure to check the manual on the Maximum
Concurrent Jobs directive for each resource.  Even if you're not
setting this anywhere, they have default values and there is at least
one of these that is '1' by default.  From your description, I would
check specifically for a Maximum Concurrent Jobs directive set to '1'
in the device resource of the storage daemon and/or in the storage
resource of the storage daemon.
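
For example (illustrative; adjust the config path to your installation):

  grep -ri 'maximum concurrent jobs' /etc/bacula/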

I also thought about the different methods of migrating the data, but
decided it wasn't worth doing.  I have vchanger auto-create the new
volumes with initmag.  I then use Bacula's 'label barcodes' command to
bulk label the empty volumes vchanger created.  I tell the existing
Bacula job to start using the new autochanger device but I keep the
old volumes on disk until their files and jobs expire.  When they do
expire, I then remove the old volumes from disk and use 'delete
volume' in Bacula.  Once you've done that for all old volumes, you're
completely migrated to vchanger.
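
In bconsole, the two ends of that migration amount to something like this
(storage and volume names are placeholders):

  *label barcodes storage=PrimaryVTLStorage

and later, once an old volume's jobs and files have expired:

  *delete volume=OldFileVol-0001

(delete volume prompts for confirmation before it touches the catalog.)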

The reason I chose this method is that the data in these volumes can
remain available for restores until their original job and file
retention periods expire.  Since you use the same job resource
regardless of the storage device, you still get the same file and job
retentions automatically applied to the files and jobs stored in the
older volumes.  This continues to clean your database and allows your
backups to continue in the way you want.  And of course, you only have
minimal work to clean up the old volumes and files.  If you have a
retention period of a year or less and are not moving large amounts of
changed data per night, this is probably a good option for you.  If
your backups have retention periods in the years and are moving large
amounts of changed data per night, this will still work for you, but
may not be the best option.

Eric

On Sun, Aug 7, 2011 at 1:51 AM, Adrian Reyer bacula-li...@lihas.de wrote:
 Hi,

 my VirtualFull backups keep failing on me as I have many of them. They
 run from 2 different File Pools and target 1 autochanger Tape library.
 No matter how many concurrent jobs I declare, and despite activating
 Spool Data (it actually does the spooling), only a single job can
 access a single File storage at a time. So now one job waits for the
 autochanger while the other waits for the File and so they are in a
 deadlock.
 The idea is now to just use the vchanger and provide multiple File
 drives to bacula to get rid of the issue. Obviously the File-Volumes are
 already written and labeled and the naming is incompatible with the
 vchanger-names.
 - Is there a different way to solve this deadlock situation permanently,
  even if the jobs have to wait some time after being scheduled because of
  other jobs blocking the devices?
 - Is there an easy way to make the current File volumes known to the
  vchanger and Bacula?
  The current approach would be to stop bacula, move and rename the
  files and update the occurrences of the File names in Job and Media
  tables.

 Regards,
        Adrian
 --
 LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
 Fon: +49 (7 11) 78 28 50 90 - Fax:  +49 (7 11) 78 28 50 91
 Mail: li...@lihas.de - Web: http://lihas.de
 Linux, Netzwerke, Consulting & Support - USt-ID: DE 227 816 626 Stuttgart


