Thank you, Marco! I will try splitting the backup of the large system into
several smaller jobs and staggering it. If that is feasible with the
number/size of files in the production storage, it sounds like an excellent
solution!
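For anyone following along, a split-and-stagger setup in bareos-dir.conf
might look roughly like this. All names, paths, and times below are
hypothetical illustrations, not the actual configuration from this thread:

```
# Sketch only: one FileSet/Job per storage slice, with the VirtualFull
# staggered to a different weekday for each slice.

FileSet {
  Name = "Slice01"
  Include {
    Options { Signature = MD5 }
    File = /production/slice01        # hypothetical mount point
  }
}

Schedule {
  Name = "Slice01-Cycle"
  Run = VirtualFull mon at 23:05      # Slice02 would run tue, Slice03 wed, ...
  Run = Incremental tue-sun at 23:05
}

Job {
  Name = "backup-slice01"
  Type = Backup
  Client = prod-fd
  FileSet = "Slice01"
  Schedule = "Slice01-Cycle"
  Pool = Brick01Storage               # its Next Pool would point at a
                                      # separate virtual-full pool
}
```

With one such Job per slice, each day's backup window only has to absorb the
VirtualFull consolidation of a single slice rather than the whole system.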

Patrick

On Wed, Oct 21, 2015 at 6:58 AM, Marco Weiß <[email protected]>
wrote:

> Hi Patrick,
>
> After posting I realized that my answer was not right ;) You are already
> doing a virtual full. OK.
>
> Now your problem is that you do not have enough time to get a virtual full
> out of your backups, and you do not have enough storage for one either,
> because Bareos first copies the data into the new full and only deletes the
> old data afterwards, right?
>
> If I'm right, I don't think this functionality is implemented in
> Bareos, but maybe someone here can enlighten us ;)
>
> We had the same problem with a few TByte of data and resolved it by
> splitting the backup into several jobs.
> With more jobs you can distribute your full backups across different days.
> We do a full backup of a different storage slice every day, so you never
> have to do a full backup of all your data in a single backup window.
>
> Regards Marco
>
>
> On Wednesday, October 21, 2015 at 12:40:07 PM UTC+2, Marco Weiß wrote:
> > Hi Patrick,
> >
> > nice to hear that you got it running! And thank you for sharing your
> knowledge!
> >
> > For your new question, I think "virtual full" is the solution you are
> searching for.
> > Have a look at the documentation.
> >
> > With that option you can do a virtual full backup based on the data
> already on your backup storage, without pulling the data from the client
> again.
> >
> > Regards Marco
> >
> > On Tuesday, October 20, 2015 at 7:01:56 PM UTC+2, Patrick Glomski wrote:
> > > Hey, Marco. Apologies for taking so long to respond. I dug into the
> manual some more, specifically the section dealing with Migration jobs.
> Apparently Bareos will not allow you to do a virtual full backup from one
> storage device to the same storage device (the manual even mentions the
> need for different media types).
> > >
> > >
> > > I created a new pool (Brick01Storage_virt) and assigned it as the
> 'Next Pool' in my other pool definitions (I've appended sanitized pool and
> storage definitions). Virtual Full backups will now run successfully.
> However, now I have a serious problem with both backup time and storage
> capacity. I was looking for functionality like "Merge everything before
> this date into a new Full backup to use as a baseline for incrementals and
> differentials". The functionality I'm currently getting is "Make a merged
> copy of the data in another storage pool". As my production system contains
> a Petabyte of clustered storage, the model I have implemented isn't
> reasonable for daily backups.
> > >
> > >
> > > The goal is currently to keep 10 days of incrementals. I can't do a
> "Full" backup of this much data every ten days, and I also can't copy/merge
> a petabyte (and growing) of data in that time (nor do I have the 2x space
> needed to do so). Is there any option in Bareos for an in-place (or close
> to in-place) merge of old jobs?
> > >
> > >
> > > If not, is there any other solution you can think of for this type of
> problem?
> > >
> > >
> > >
> > > I appreciate your time,
> > >
> > >
> > > Patrick
>
> --
> You received this message because you are subscribed to the Google Groups
> "bareos-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
>
