On Wednesday 22 August 2007 05:28, Charles Sprickman wrote:
> On Tue, 21 Aug 2007, Nick Pope wrote:
> > On Aug 21, 2007, at 8:25 PM, Charles Sprickman wrote:
> >> On Tue, 21 Aug 2007, Nick Pope wrote:
> >>> On Aug 20, 2007, at 12:48 PM, David Boyes wrote:
> >>>>> I would like to second this.  Right now I have duplicates of
> >>>>> everything to first do a local backup and 7 hours later another
> >>>>> backup of the same data (but without the scripts and longer
> >>>>> runtime) to an offsite storage to mirror the data.
> >>>>
> >>>> In our shop, this wouldn't be sufficient to satisfy the auditors.
> >>>> The contents of the systems could have changed, and thus the
> >>>> replica is not a provably correct copy.
> >>>
> >>> It seems that an interim step that involves less effort is possible.
> >>> Couldn't the migrate capability be altered ever so slightly to allow
> >>> the "migration" of a job without purging the old job from the
> >>> catalog?  This would allow bitwise identical backup(s) to be created
> >>> without having to create a forking/muxing SD/FD.
> >>>
> >>> This, of course, does not create the identical backups at the same
> >>> instant in time, but it would solve the off-site backup problem with
> >>> much less effort.  I'm certainly not discounting the need for a
> >>> muxing SD, but if we got the copy/migrate-without-purge capability
> >>> much faster, would it meet many people's needs?
> >>
> >> It depends...  Think of a case where you've got equipment in a
> >> datacenter. It's unattended, so your tape backups are back at the office
> >> which may have a fairly slow link.  It would be very, very handy to have
> >> the data sent both to a box with a bunch of big disks in the datacenter
> >> (for quick recovery) as well as to a tape drive at the office (more of
> >> an offsite emergency use only sort of thing).
> >>
> >> Charles
> >
> > OK, so you would back up to disk in the data center as usual.  Then, when
> > the disk backup is done, you can spawn a copy/migrate job to copy the data
> > down to the tape drives over the slow link.  This is a perfect example
> > where the migrate-without-purge job copying is "good enough" and
> > full-blown parallel backups to multiple pools would not be needed (unless
> > I'm missing something).
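
One inline note on the mechanics: the disk-then-tape step can already be
expressed with the existing Migration support.  The sketch below is only an
illustration; the pool, storage and client names are placeholders, and the
exact directive spellings may vary between Bacula versions.

  Pool {
    Name = DiskPool
    Pool Type = Backup
    Storage = DCFileStorage       # disk storage in the data center
    Next Pool = TapePool          # destination pool for migrated jobs
  }

  Pool {
    Name = TapePool
    Pool Type = Backup
    Storage = OfficeTape          # tape drive over the slow office link
  }

  Job {
    Name = "migrate-disk-to-tape"
    Type = Migrate
    Pool = DiskPool               # jobs are selected from this pool
    Selection Type = Job
    Selection Pattern = ".*"      # migrate every job found in DiskPool
    Client = "dc-fd"              # required Job directives, not used for selection
    FileSet = "Full Set"
    Messages = Standard
  }

The catch, as the rest of this thread points out, is that a Migrate job
purges the original disk copy from the catalog once the data has been moved,
which is exactly why a copy-without-purge variant is being asked for.
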
>
> Not quite...  The backups happen overnight, which is fine, but that
> migrate/copy job would probably creep into business hours and squash the
> slow office link, or worse, still be running when the next night's backup
> starts.
>
> I'm just thinking pie-in-the-sky at this point.  I tend to work with
> places that don't have lots of capital, so we have to kludge lots of
> stuff.  While I'm dreaming, I'd love to have a way to push data off to
> Amazon S3 storage...

Over the last 7.5 years, I've spent most of my time getting new features into 
Bacula that serve the majority of users.  My personal efforts for the next 
version, possibly two, will be concentrated on implementing features needed 
by large enterprises.  The main reason is that I see the enterprise market 
opening quickly, and with companies like Zmanda in it, not only will we get 
proprietary software that claims to be Open Source into those enterprises, 
but it will be 20-year-old technology.

>
> > I guess the point I'm making is that I'd vote for a simpler version of
> > the job copying feature that would work in a serial fashion using a very
> > slightly modified migrate job if we could get it much sooner than the
> > parallel muxing SD that could send jobs to multiple places at once.
>
> How would this migrate work in the example I cited where I'd be migrating
> to tape, but my most common restores would be coming out of the disk pool?
>
> I wish I'd started earlier in this thread; I'm coming from Amanda and
> there are a few things there worth stealing:
>
> -spooling unlimited backups to disk so that if you have tape problems or
> just can't get someone to change tapes your backups still run

If you set your spool size very large, this is exactly what Bacula will do.  
It is *extremely* rare that the whole Bacula job fails because of tape 
problems.
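
For anyone setting this up, the two pieces involved are data spooling on the
Job and a large spool area on the Storage daemon.  Roughly (the device name,
path and size below are only placeholders):

  # Director, Job resource:
  Job {
    Name = "nightly-backup"
    Type = Backup
    Spool Data = yes              # jobs are written to the spool area first,
                                  # then despooled to tape when it is available
    # (plus the usual Client, FileSet, Storage, Pool, Schedule and Messages)
  }

  # Storage daemon, Device resource for the tape drive:
  Device {
    Name = LTO-3
    Media Type = LTO-3
    Archive Device = /dev/nst0
    Spool Directory = /var/bacula/spool
    Maximum Spool Size = 300gb    # large enough to hold a full night's jobs
  }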

>
> -a "smart" scheduler/planner, although I love the fact that Bacula is not
> so strict about how you design your tape rotation.  "smart" meaning that
> you don't have to manually deal with missed tape loads, or tell it that if
> you missed a night not to run two incrementals back-to-back, etc.

One of my personal projects, which no one seems to have submitted as a Feature 
Request, is a directive that allows only one job of a given name to run at a 
time, upgrading the job's level if necessary.  The other part is a way to tell 
the scheduler to run a Full at least once every x days, so that if a Full 
fails, the next job started will be automatically upgraded.
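
Just to sketch what I have in mind (neither directive exists today; the names
are invented purely for illustration):

  Job {
    Name = "nightly-backup"
    Type = Backup
    Level = Incremental
    # Hypothetical: never queue a second "nightly-backup" while one is
    # already running; upgrade the level of the one job if necessary.
    Allow Duplicate Jobs = no
    # Hypothetical: if no successful Full exists within the last 30 days,
    # automatically upgrade the next job that starts to a Full.
    Max Full Interval = 30 days
    # (plus the usual Client, FileSet, Storage, Pool, Schedule and Messages)
  }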

>
> -an option to run native dumps so you can (easily) get things like
> snapshot support (dump -L on FreeBSD).

Good snapshot support is available with LVM, and Bacula 2.2.0 can now deal 
very gracefully with snapshots mounted into subdirectories.
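
As a rough illustration (the volume group, snapshot size, script paths and
mount point are all placeholders to adapt to your own setup), the FD can
create the snapshot, mount it into a subdirectory, back up that subdirectory,
and tear it down afterwards:

  Job {
    Name = "home-backup"
    Type = Backup
    FileSet = "home-snap"
    Client Run Before Job = "/usr/local/sbin/make-snap.sh"
    Client Run After Job  = "/usr/local/sbin/remove-snap.sh"
    # (plus the usual Client, Storage, Pool, Schedule and Messages)
  }

  FileSet {
    Name = "home-snap"
    Include {
      Options { signature = MD5 }
      File = /home/.snapshot      # the mounted snapshot, not the live data
    }
  }

  # make-snap.sh, roughly:
  #   lvcreate --snapshot --size 2G --name homesnap /dev/vg0/home
  #   mkdir -p /home/.snapshot
  #   mount -o ro /dev/vg0/homesnap /home/.snapshot
  #
  # remove-snap.sh:
  #   umount /home/.snapshot
  #   lvremove -f /dev/vg0/homesnap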

>
> -a smarter reporting system that can send you the day's job output in one
> big email and alert you to what tape needs loading next
>
> You'll note of course these are all features that benefit changer-less
> people. :)
>
> thanks,
>
> Charles
>
> > Now this is all premised on a huge assumption: that a basic
> > migrate/copy-without-purge would be MUCH simpler/quicker to implement
> > than a muxing SD that could copy to multiple pools at once.  This may not
> > be the case.
> >
> > -Nick
