Re: [Bacula-users] Disk backup strategy advice / help

2012-01-17 Thread Sebastien Douche
On Mon, Jan 16, 2012 at 19:15, Adrian Reyer  wrote:
> Well, as you copy the matching catalog as well, it should be fine.
> It all depends on what you want your backup to be for. In my case I want
> to be safe against losing single (or a few) backup media while the backup
> server itself is still fine, as I run redundant servers anyway. You
> obviously plan for a complete backup system breakdown, at the expense of
> a harder time restoring single lost media. On the other hand, if I
> experience a complete backup server outage I have to bscan the offsite
> tapes.
> You get the benefits of both approaches by, e.g.:
> - a copy job for offsite media; if some medium fails, the copy becomes
>   active once you delete the original.
> - SQL-server replication offsite, or alternatively dump & restore.
> - having all configuration files ready offsite. As I run
>   'Linux-VServers' I would just rsync the backup server itself.

Thank you Adrian for your explanation.

-- 
Sebastien Douche 
Twitter: @sdouche / G+: +sdouche



Re: [Bacula-users] Disk backup strategy advice / help

2012-01-16 Thread Adrian Reyer
Hi Sebastien,

On Mon, Jan 16, 2012 at 11:38:50AM +0100, Sebastien Douche wrote:
> > I think bacula is not the ideal tool for running additional offsite
> > backups. And very likely rsync is not a good way if you use bacula.
> I rsync data, catalog and bsr files to external disks, and I would like
> to know why it's not a good solution.

Well, as you copy the matching catalog as well, it should be fine.
It all depends on what you want your backup to be for. In my case I want
to be safe against losing single (or a few) backup media while the backup
server itself is still fine, as I run redundant servers anyway. You
obviously plan for a complete backup system breakdown, at the expense of
a harder time restoring single lost media. On the other hand, if I
experience a complete backup server outage I have to bscan the offsite
tapes.
You get the benefits of both approaches by, e.g.:
- a copy job for offsite media; if some medium fails, the copy becomes
  active once you delete the original.
- SQL-server replication offsite, or alternatively dump & restore.
- having all configuration files ready offsite. As I run
  'Linux-VServers' I would just rsync the backup server itself.

Regards,
Adrian
-- 
LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
Fon: +49 (7 11) 78 28 50 90 - Fax:  +49 (7 11) 78 28 50 91
Mail: li...@lihas.de - Web: http://lihas.de
Linux, Netzwerke, Consulting & Support - USt-ID: DE 227 816 626 Stuttgart



Re: [Bacula-users] Disk backup strategy advice / help

2012-01-16 Thread Sebastien Douche
On Wed, Jan 4, 2012 at 22:52, Adrian Reyer  wrote:

Hi Adrian

> I think bacula is not the ideal tool for running additional offsite
> backups. And very likely rsync is not a good way if you use bacula.

I rsync data, catalog and bsr files to external disks, and I would like
to know why it's not a good solution.
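
Roughly, the nightly copy looks something like this (paths and the MySQL
catalog dump are only examples; adjust to your own setup):

# dump the catalog so the copied volumes stay usable on their own
mysqldump bacula | gzip > /mnt/external/bacula-catalog.sql.gz
# copy volumes and bootstrap files to the external disk
rsync -a --delete /var/lib/bacula/volumes/ /mnt/external/volumes/
rsync -a /var/lib/bacula/*.bsr /mnt/external/bsr/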



-- 
Sebastien Douche 
Twitter: @sdouche / G+: +sdouche



Re: [Bacula-users] Disk backup strategy advice / help

2012-01-05 Thread Alex Ehrlich
Hello,

I have set up my offsite backups so that only the Bacula volumes get
rsynced, and it has worked fine for half a year. The total backup size is
about 500 GB; the nightly amount of rsynced data is between 1 and 10 GB
(so my home ADSL connection with 10 Mbit/s downstream is enough to keep
offsite backups at home).
This assumes that you need the offsite backup only when you have totally
lost the onsite one. In that case the Bacula server has to be
restored/reinstalled first (or better, in advance, so that you can test
restoration) and the database has to be restored before using the offsite
backup.
Additionally, I encrypt the Bacula volumes on the fly while rsyncing them
to the remote side (on the fly by means of FUSE encfs).
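
In case it is useful, the encryption part is roughly the following (paths
and the remote host are only examples):

# present an encrypted view of the plain volume directory (encfs reverse mode)
encfs --reverse /var/lib/bacula/volumes /mnt/encrypted-view
# transfer only the encrypted files to the remote side
rsync -av --delete /mnt/encrypted-view/ backup@offsite-host:/srv/bacula-volumes/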

Regards,

Alex Ehrlich




Re: [Bacula-users] Disk backup strategy advice / help

2012-01-05 Thread keith
On 04/01/2012 21:52, Adrian Reyer wrote:
> On Wed, Jan 04, 2012 at 12:16:55PM +, keith wrote:
>> 460M  Dec 29 23:19 Full-0001
>> 26.3G Dec 30 23:52 Full-0003
>> 702M  Dec 24 23:05 Inc-0001
>> 10.0G Dec 30 01:54 Inc-0002
>> 2.3G  Dec 31 00:06 Inc-0004
>> 3.1G  Dec 31 00:56 Inc-0005
>> 611M  Dec 31 00:56 Inc-0006
> Is the 10G on 24th mostly additional, changed or moved data? Are these
> compressed backups?
Hi Adrian, the 10G was just a new additional client being introduced
into Bacula.
>
>> Now that the backups seems to be working I need to figure out how to
>> implement an offsite strategy, I want to use a combination of removable
>> disks and rsync to do this.
> I think bacula is not the ideal tool for running additional offsite
> backups. And very likely rsync is not a good way if you use bacula.
Oh ok
>
> I have 3 possibilities in mind:
>
> 1. If you are not talking about windows clients, I'd consider using rsync
> (e.g. via rsnapshot) to run the complete offsite backup unrelated to
> bacula. Run one rsync/rsnapshot job per client and the 'new' client will
> just run longer, independent of the others except the shared bandwidth.
> With rsnapshot you only need to do one full backup per client; changed
> files just lead to new full backup sets, but only the differences need
> to be transferred. We do that at several locations and wrote a wrapper
> around rsnapshot (which is itself a wrapper around rsync); Debian
> packages are available at
> deb http://ftp.lihas.de/debian stable main
> package rsnapshot-backup.
> If you add some file-unification (deduplication) tool, you get away with
> far less disk space.
> + only changes need to be transferred
> + initial backup can easily be transferred on external media to save bandwidth
> - no bacula, no bacula indexes
> - no backup of windows clients / anything that doesn't have rsync
We have some Unix servers, but the bulk of our servers are Windows.
Our old/current backup strategy is to do full backups nightly; these
are about 450G compressed.

If I could, I would like to do some type of copy job where I copy the
incremental files to another place on the server and then get rsync to
download those files, knowing that they were just the incrementals.
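
For instance, if the incremental volumes keep an "Inc-" prefix, something
like this might already be enough (untested sketch, paths are made up):

rsync -av --include='Inc-*' --exclude='*' /var/bacula/volumes/ offsite:/backups/inc/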

>
> 2. Alternatively you can use the normal bacula backup + a copy job.
> As copy jobs only work within the same bacula-sd, you could e.g.
> NFS-mount some external server and store the target pools there. Or copy
> the full pool to local disks on individual mount points, move the volumes
> to the remote location, and replace them with links to the remote NFS
> share.
> + works with all clients
> - regularly transporting volumes offsite is required

I only just read about "Migration / Copy" jobs last night (I am slowly
getting through the Bacula manual) and will probably try to get one of
these jobs working later today.  I plan to have one dedicated bacula-fd
server and have also planned to put the removable disks (offsite backup)
into this server. If I can get Bacula to manage my offsite disks and also
know what's on these disks, that will be great.

>
> 3. Run a completely separate job instance to the remote site using a
> bacula-sd installed there. Use virtual full backups to create the fulls
> from the full/diff/inc backups. Initially a full backup has to pass the
> remote connection.
> + works with all clients
> 0 initial full might be expensive in bandwidth
>
> Currently I use 1. and 2. myself. With 3. I ran into trouble selecting
> the correct pools in my environment, and with virtual full in general
> when a tape changer with a single drive is involved.
There are network/firewall issues outwith my control that would make
remote Bacula backups difficult.


>> If I add a new server to be backed up to Bacula midweek it does a full
>> backup in the INC pool. This might be a big backup and screw up my rsync
>> job.
>> Does this seem like a good idea, and does anyone know how to keep Full
>> backups out of the INC or DIFF pool?
> Just do a manual initial full backup on the new client. As I assume they
> don't appear magically in your backup setup.
>
> Regards,
>   Adrian

You're right, they shouldn't just be appearing, but they are while I play
with Bacula :>)  But in the future, when adding new clients, it makes
sense to manually kick off a full backup.


Adrian, thanks for the detailed answers. I'll give the copy jobs a try
and see how I get on.

Cheers
Keith.




Re: [Bacula-users] Disk backup strategy advice / help

2012-01-04 Thread Adrian Reyer
On Wed, Jan 04, 2012 at 12:16:55PM +, keith wrote:
> 460M  Dec 29 23:19 Full-0001
> 26.3G Dec 30 23:52 Full-0003
> 702M  Dec 24 23:05 Inc-0001
> 10.0G Dec 30 01:54 Inc-0002
> 2.3G  Dec 31 00:06 Inc-0004
> 3.1G  Dec 31 00:56 Inc-0005
> 611M  Dec 31 00:56 Inc-0006

Is the 10G on the 24th mostly additional, changed, or moved data? Are these
compressed backups?

> Now that the backups seems to be working I need to figure out how to 
> implement an offsite strategy, I want to use a combination of removable 
> disks and rsync to do this.

I think bacula is not the ideal tool for running additional offsite
backups. And very likely rsync is not a good way if you use bacula.

I have 3 possibilities in mind:

1. If you are not talking about windows clients, I'd consider using rsync
(e.g. via rsnapshot) to run the complete offsite backup unrelated to
bacula. Run one rsync/rsnapshot job per client and the 'new' client will
just run longer, independent of the others except the shared bandwidth.
With rsnapshot you only need to do one full backup per client; changed
files just lead to new full backup sets, but only the differences need
to be transferred. We do that at several locations and wrote a wrapper
around rsnapshot (which is itself a wrapper around rsync); Debian
packages are available at
deb http://ftp.lihas.de/debian stable main
package rsnapshot-backup.
If you add some file-unification (deduplication) tool, you get away with
far less disk space.
+ only changes need to be transferred
+ initial backup can easily be transferred on external media to save bandwidth
- no bacula, no bacula indexes
- no backup of windows clients / anything that doesn't have rsync
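
A minimal rsnapshot.conf sketch for one client could look roughly like
this (paths, retention counts and the client name are only examples;
rsnapshot wants tabs between the fields, and older versions spell the
'retain' lines 'interval'):

snapshot_root   /srv/offsite-snapshots/
retain          daily   7
retain          weekly  4
backup          root@client1:/etc/      client1/
backup          root@client1:/home/     client1/

A cron entry like 'rsnapshot daily' then does the actual transfer.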

2. Alternatively you can use the normal bacula backup + a copy job.
As copy jobs only work within the same bacula-sd, you could e.g.
NFS-mount some external server and store the target pools there. Or copy
the full pool to local disks on individual mount points, move the volumes
to the remote location, and replace them with links to the remote NFS
share.
+ works with all clients
- regularly transporting volumes offsite is required
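
A rough sketch of that copy-job variant (all resource names are made up;
the important bits are the Next Pool on the source pool and the Copy job
with a selection type):

Pool {
  Name = FullPool
  Pool Type = Backup
  Storage = LocalDisk
  Next Pool = OffsitePool        # copy jobs write their copies here
}
Pool {
  Name = OffsitePool
  Pool Type = Backup
  Storage = OffsiteNFS
}
Job {
  Name = CopyFullsOffsite
  Type = Copy
  Pool = FullPool                # source pool to copy from
  Selection Type = PoolUncopiedJobs
  Client = backup-server-fd      # required by the Director, not used for the copy
  FileSet = "Full Set"
  Schedule = WeeklyCycle
  Messages = Standard
}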

3. Run a completely separate job instance to the remote site using a
bacula-sd installed there. Use virtual full backups to create the fulls
from the full/diff/inc backups. Initially a full backup has to pass the
remote connection.
+ works with all clients
0 initial full might be expensive in bandwidth
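
The virtual full itself is then just a backup level; from bconsole it is
something like the following (the job name is only an example, and the
source pool needs a Next Pool pointing at where the consolidated full
should be written):

run job=OffsiteBackup level=VirtualFull yes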

Currently I use 1. and 2. myself. With 3. I ran into trouble selecting
the correct pools in my environment, and with virtual full in general
when a tape changer with a single drive is involved.

> If I add a new server to be backed up to Bacula midweek it does a full 
> backup in the INC pool. This might be a big backup and screw up my rsync
> job.
> Does this seem like a good idea, and does anyone know how to keep Full
> backups out of the INC or DIFF pool?

Just do a manual initial full backup of the new client, as I assume
clients don't appear magically in your backup setup.

Regards,
Adrian
-- 
LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
Fon: +49 (7 11) 78 28 50 90 - Fax:  +49 (7 11) 78 28 50 91
Mail: li...@lihas.de - Web: http://lihas.de
Linux, Netzwerke, Consulting & Support - USt-ID: DE 227 816 626 Stuttgart



[Bacula-users] Disk backup strategy advice / help

2012-01-04 Thread keith
I have Bacula 5.2.3 up and running and need some advice on the following.

I used this tutorial
http://bacula.org/fr/dev-manual/Automated_Disk_Backup.html as a starting
point. It's working well, and I have backups appearing as follows...

460M  Dec 29 23:19 Full-0001
26.3G Dec 30 23:52 Full-0003

702M  Dec 24 23:05 Inc-0001
10.0G Dec 30 01:54 Inc-0002
2.3G  Dec 31 00:06 Inc-0004
3.1G  Dec 31 00:56 Inc-0005
611M  Dec 31 00:56 Inc-0006


Now that the backups seem to be working I need to figure out how to
implement an offsite strategy. I want to use a combination of removable
disks and rsync to do this.

- I want to use removable disks to take a backup offsite either weekly
  or fortnightly (just copy the most recent Full- file to the removable
  disk).
- I would like to rsync the daily INC & weekly DIFF backups offsite if
  possible (100MB link).

My plan is that for complete recovery I will use a combination of the
full backup that I will get from the removable disks and the rsynced
INC/DIFF backups.

If I add a new server to be backed up to Bacula midweek, it does a full
backup in the INC pool. This might be a big backup and screw up my rsync
job.

Does this seem like a good idea, and does anyone know how to keep Full
backups out of the INC or DIFF pools?

Thanks
Keith









Re: [Bacula-users] disk backup

2011-09-29 Thread John Drescher
On Thu, Sep 29, 2011 at 10:53 AM, Josh Fisher  wrote:
>
> On 9/29/2011 10:35 AM, John Drescher wrote:
>> 2011/9/29 Ignacio Cardona:
>>> Dear all,
>>> I need some help with a small issue. At the moment I am
>>> backing up to several devices, but the problem is that I am running
>>> out of space on my hard disk. Is it possible to burn some backups to
>>> DVDs in order to erase those backups from the hard drive?
>> Yes but you need 4GB disk volumes.
>
> Also, consider that a 250 GB USB external drive can be had for little
> more than the price of a stack of DVD-R media.
>
And it's a lot less work. I was thinking of that after my first post.

John



Re: [Bacula-users] disk backup

2011-09-29 Thread Josh Fisher

On 9/29/2011 10:35 AM, John Drescher wrote:
> 2011/9/29 Ignacio Cardona:
>> Dear all,
>> I need some help with a small issue. At the moment I am
>> backing up to several devices, but the problem is that I am running
>> out of space on my hard disk. Is it possible to burn some backups to
>> DVDs in order to erase those backups from the hard drive?
> Yes but you need 4GB disk volumes.

Also, consider that a 250 GB USB external drive can be had for little 
more than the price of a stack of DVD-R media.

>> If possible, when I
>> need to perform a restore, will bacula know where the file is?
>>
> When bacula does not find a volume it will ask you to mount it. At
> that time you will have to copy the volume back to the disk.
>
> John
>



Re: [Bacula-users] disk backup

2011-09-29 Thread John Drescher
2011/9/29 Ignacio Cardona :
> Dear all,
> I need some help with a small issue. At the moment I am
> backing up to several devices, but the problem is that I am running
> out of space on my hard disk. Is it possible to burn some backups to
> DVDs in order to erase those backups from the hard drive?

Yes, but you need 4 GB disk volumes.
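
For example, a pool along these lines (name and exact size are only
examples; a single-layer DVD holds about 4.7 GB):

Pool {
  Name = DVDSizedPool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 6 months
  Maximum Volume Bytes = 4g      # one volume file per DVD
  LabelFormat = "DVDVol"
}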

> If possible, when I
> need to perform a restore, will bacula know where the file is?
>
When bacula does not find a volume it will ask you to mount it. At
that time you will have to copy the volume back to the disk.

John



[Bacula-users] disk backup

2011-09-29 Thread Ignacio Cardona
Dear all,
I need some help with a small issue. At the moment I am
backing up to several devices, but the problem is that I am running
out of space on my hard disk. Is it possible to burn some backups to
DVDs in order to erase those backups from the hard drive? If possible,
when I need to perform a restore, will Bacula know where the file is?

Thanks in advance!
Don't hesitate to contact me for further information.
-- 
--
Thanks!
Regards!

Ignacio Ariel Cardona.
IT Consultant. (SAN & NAS Specialist)


Re: [Bacula-users] disk backup -- swapping out disks?

2011-08-08 Thread Kleber Leal
OK, I think you did not understand me.

When you do a copy job, Bacula will prefer to restore from the local
disks, as long as they have not been recycled.
I have Bacula configured with copy jobs; the offsite media are updated
twice a month, and any restore within the local media retention time is
done without needing my offsite media.

Read about copy and migration jobs in the main manual. It's what you want.

Kleber


2011/8/8 hymie! 

>
> Kleber Leal writes:
> >You can use disks off-site making *copy jobs* too. This is better when you
>need to do a restore, since you will not need to get the off-site disk.
>
> You're right about the need to get the off-site disks.  Sadly, the
> bandwidth of my car far exceeds my available network bandwidth.
> Fortunately, we're just at the planning stage, and this is just one
> option.
>
> --hymie!  http://lactose.homelinux.net/~hymie
> hy...@lactose.homelinux.net
>
> ---
>


[Bacula-users] disk backup -- swapping out disks?

2011-08-08 Thread hymie!

So the question has come up about storing our backups off-site.

We're using disk backups, with two USB disks.  Each disk is a
Storage, and each Storage has a single Pool, and each Pool has max
200 4GB volumes.  Part of my configuration is below.

What I'd like to do is buy a second matching set of USB disks,
take the existing disks off, mount the new disks in the same place,
and just have Bacula pick up where it left off, seeing no volumes
and creating them as it needs to.

(We do full backups once a month anyway, and I would probably do it that
morning.)

Then in a month, I will do it again, take the disks off, put the
old disks back in, and Bacula will just notice all of its old volumes
are back.

Is this a workable solution?

--hymie!  http://lactose.homelinux.net/~hymie  hy...@lactose.homelinux.net
---

(from bacula-sd.conf)

Device {
  Name = FileStorage
  Device Type = File
  Media Type = File
  Archive Device = /storage
  Maximum Concurrent Jobs = 1
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = yes;
  AlwaysOpen = yes;
}

(from bacula-dir.conf)

Storage {
  Name = File
  Address = 10.0.xxx.xxx  # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = ""
  Maximum Concurrent Jobs = 1
  Device = FileStorage
  Media Type = File
}
Pool {   
  Name = Pool1
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 6 months
  Maximum Volume Bytes = 4g
  LabelFormat = "S1Vol"
  Maximum Volumes = 200
  Action On Purge = Truncate
}




Re: [Bacula-users] Disk Backup / LabelMedia / UseVolumeOnce / VolumeRetention

2009-08-19 Thread ganiuszka


jaschu wrote:
> 
> Now the question: Will old Volumes be deleted automatically, after
> VolumeRetention has passed, or will they remain on disk, in which case I
> would have to delete them manually? 
> 
Hi,

From the Bacula documentation about volume recycling:

"...when Bacula recycles a Volume, the Volume becomes available for being
reused, and Bacula can at some later time overwrite the previous contents of
that Volume. Thus all previous data will be lost. If the Volume is a tape,
the tape will be rewritten from the beginning. If the Volume is a disk file,
the file will be truncated before being rewritten."

Here you can find more information:
http://bacula.org/manuals/en/concepts/concepts/Automatic_Volume_Recycling.html

gani




Re: [Bacula-users] Disk Backup / LabelMedia / UseVolumeOnce / VolumeRetention

2009-08-19 Thread John Drescher
> I want to backup to disk, having each Job in a separate Volume.
>
> For this purpose, I have in bacula-dir.conf
> Pool {
>  UseVolumeOnce   = yes
>  VolumeRetention = 30 days
>  AutoPrune       = yes
>  ...
> }
>
> ... and in bacula-sd.conf
> Device {
>  LabelMedia     = yes
>  AutomaticMount = yes
>  ...
> }
>
> Now the question: Will old Volumes be deleted automatically, after 
> VolumeRetention has passed, or will they remain on disk, in which case I 
> would have to delete them manually?
>

They will remain; Bacula does not delete volumes at all. Also, if you
change the VolumeRetention in the Pool resource and do not do an
"update Pool from resource" from a bacula console, the old retention
periods will stay on all existing volumes.
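
In bconsole that is roughly the following (the exact menu wording varies
a bit between versions):

*update
  (choose "Pool from resource" and pick the pool, to refresh the catalog's
   copy of the Pool settings)
*update
  (choose "Volume parameters", then something like "All Volumes from Pool",
   to push the new retention onto the existing volumes)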

John



[Bacula-users] Disk Backup / LabelMedia / UseVolumeOnce / VolumeRetention

2009-08-19 Thread Jan Schulze
Hi all,

I want to back up to disk, having each Job in a separate Volume.

For this purpose, I have in bacula-dir.conf
Pool {
  UseVolumeOnce   = yes
  VolumeRetention = 30 days
  AutoPrune   = yes
  ...
}

... and in bacula-sd.conf
Device {
  LabelMedia = yes
  AutomaticMount = yes
  ...
}

Now the question: Will old Volumes be deleted automatically, after 
VolumeRetention has passed, or will they remain on disk, in which case I would 
have to delete them manually? 


Best Regards,
Jan



Re: [Bacula-users] Disk backup: how to delete purged files on disk?

2008-10-22 Thread Annette Jäkel
On 20.10.2008 at 18:02, "Kevin Keane" <[EMAIL PROTECTED]> wrote:

> John Drescher wrote:
>>> I am learning how to use bacula to back up to external hard disks.
>>> Everything works beautifully, but I notice that after a volume is purged
>>> or pruned, it gets status "Recycled" and the actual file on disk stays.
>>> That looks like perfect behavior for a tape backup, but for a hard disk
>>> backup, I would like to be able to completely delete the volume from the
>>> database, and also delete the corresponding file on the hard disk so it
>>> no longer takes up space.
>>> 
>>> 
>> Bacula does not have this feature. I would just limit the disk volumes
>> to a few GB and at the proper time the disk volumes will be reused.
>> 
>> John
>>   
> That might well be a good solution, and address a couple other things,
> too. I suppose I have been too used to tar-style backups, where each
> backup ended up in a separate file. But you are right, it doesn't have
> to be organized that way.
> 
> Do you by any chance have a sample bacula-dir.conf that would illustrate
> how this would work?
> 
> What would be a good choice for the volume limit? I am backing up a
> total of seven machines; the largest is a 70 GB backup (full), the
> smallest is less than 1 GB. I would like to keep them all in the same
> pool because I quite frequently add or remove machines, and don't want
> to have to reorganize the pools all the time.
> 
> A second question I was going to post in a separate thread, but now it
> seems related: I have two external USB/eSATA drives, and can't figure
> out how to get bacula to rotate them on a weekly basis so I can take one
> of them off site.
> 

I think you can do this with two jobs, two pools, two schedules, and the
same fileset. I'm not sure about the Storage definition.
My configuration for second-level tape backup is done in this manner: all
volume files from the daily disk backup are backed up to tape. Tape jobs
and tape pools rotate over three months, so I can always take the third
tape pool out of the tape library. In my case the storage definition is
always the same - the definition of the library and the autochanger.
BTW: in the JobDefs for my three jobs I define "Prefer Mounted Volumes = no".
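
A rough sketch of the idea for one of the two drives (all names are made
up; repeat the three resources with a "B" suffix for the second drive on
the other weeks, and adjust the retention to your rotation):

Pool {
  Name = OffsiteA
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 3 weeks
}
Schedule {
  Name = OffsiteA-Weeks
  Run = Level=Full 1st sun at 23:05
  Run = Level=Full 3rd sun at 23:05
}
Job {
  Name = OffsiteToDriveA
  JobDefs = DefaultJob           # same fileset as the normal jobs
  Pool = OffsiteA
  Storage = UsbDriveA
  Schedule = OffsiteA-Weeks
}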
 
Hope this helps.
Annette

> Kevin
> 
> 





Re: [Bacula-users] Disk backup: how to delete purged files on disk?

2008-10-20 Thread Kevin Keane
John Drescher wrote:
>> I am learning how to use bacula to back up to external hard disks.
>> Everything works beautifully, but I notice that after a volume is purged
>> or pruned, it gets status "Recycled" and the actual file on disk stays.
>> That looks like perfect behavior for a tape backup, but for a hard disk
>> backup, I would like to be able to completely delete the volume from the
>> database, and also delete the corresponding file on the hard disk so it
>> no longer takes up space.
>>
>> 
> Bacula does not have this feature. I would just limit the disk volumes
> to a few GB and at the proper time the disk volumes will be reused.
>
> John
>   
That might well be a good solution, and address a couple other things, 
too. I suppose I have been too used to tar-style backups, where each 
backup ended up in a separate file. But you are right, it doesn't have 
to be organized that way.

Do you by any chance have a sample bacula-dir.conf that would illustrate 
how this would work?

What would be a good choice for the volume limit? I am backing up a 
total of seven machines; the largest is a 70 GB backup (full), the 
smallest is less than 1 GB. I would like to keep them all in the same 
pool because I quite frequently add or remove machines, and don't want 
to have to reorganize the pools all the time.

A second question I was going to post in a separate thread, but now it 
seems related: I have two external USB/eSATA drives, and can't figure 
out how to get bacula to rotate them on a weekly basis so I can take one 
of them off site.

Kevin




Re: [Bacula-users] Disk backup: how to delete purged files on disk?

2008-10-20 Thread John Drescher
> I am learning how to use bacula to back up to external hard disks.
> Everything works beautifully, but I notice that after a volume is purged
> or pruned, it gets status "Recycled" and the actual file on disk stays.
> That looks like perfect behavior for a tape backup, but for a hard disk
> backup, I would like to be able to completely delete the volume from the
> database, and also delete the corresponding file on the hard disk so it
> no longer takes up space.
>
Bacula does not have this feature. I would just limit the disk volumes
to a few GB and at the proper time the disk volumes will be reused.
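
For example, a pool along these lines (names, sizes and retention are
only examples):

Pool {
  Name = DiskPool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 30 days
  Maximum Volume Bytes = 5g      # keep individual volume files small
  Maximum Volumes = 100          # caps total disk usage at roughly 500 GB
  LabelFormat = "DiskVol"
}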

John



[Bacula-users] Disk backup: how to delete purged files on disk?

2008-10-20 Thread Kevin Keane
Maybe this is an FAQ, but I didn't find the answer...

I am learning how to use bacula to back up to external hard disks. 
Everything works beautifully, but I notice that after a volume is purged 
or pruned, it gets status "Recycled" and the actual file on disk stays. 
That looks like perfect behavior for a tape backup, but for a hard disk 
backup, I would like to be able to completely delete the volume from the 
database, and also delete the corresponding file on the hard disk so it 
no longer takes up space.

I'm using bacula version 2.4.2

Thanks!




Re: [Bacula-users] Disk backup recycling

2007-11-01 Thread Radek Hladik

Hi,
Option 0 is the best one, however there are financial drawbacks :-) The
whole situation is like this: I have a Bacula server with two remote
SAN-connected drives. The SAN does mirroring etc., and the SAN drives are
considered stable and safe.
I have a backup rotation schema with 1 weekly full backup and 6
differential/incremental backups. I need to back up various routers,
servers, important workstations, etc. There are currently about 20
clients; the number should increase over time. The total storage to be
backed up is around 200-250 GB.
Until now we have used a simple ssh+tar solution. I would like to use
Bacula to "tidy up" the whole process and make it more reliable and
robust. The disk space on the SAN is expensive and "precious" and I would
like to use it reasonably. So I have no problem with connecting one
client after another, performing a full backup to a new volume and
deleting the old full backup afterwards - this is how it works now with
ssh+tar: a client connects, backs up to a temporary file, and when the
backup is complete the old backup is deleted and the temporary file is
renamed. Clients are backed up one after another, so I do not need that
much overhead disk space, and I have no problem justifying extra space
for one full backup.
I am still considering other options like spooling to a local SATA drive,
or backing up to a local drive and synchronizing to the SAN drives, but
every solution has some disadvantages...
A full catalog backup will be performed on the SAN drives every day and
to remote servers. I consider it the most valuable data to be backed up :-)


Radek


Marek Simon wrote:

My opinion to your ideas:
0) Leave the schema as I submited and buy more disk space for backuping. :-)

1) It is best variant I think. The other advantage is that the full 
backup of all clients would take much longer time then 1/7th full and 
other differential. Now what to do with Catalog:

You can backup the catalog to some changable media (tape, CD/DWD-RW).
You can pull the (zipped and may be encrypted) catalog to some or all of 
your clients.
You can send your (zipped and maybe encrypted) Catalog to some friend of 
you (and you can backup his catalog for reciprocation), but it may be a 
violence of the data privacy (even if the Catalog contain only names and 
sizes).
You can forward the bacula messages (completed backups) to some external 
mail address and then if needed you can reconstruct the job-volume 
binding from them.
The complete catalog is too big for sending it by e-mail, but still you 
can do SQL selection in catalog after the backup and send the job-volume 
bindings and some other relevant information to the external email 
address in CSV format.
Still you can (and I strongly recommend to) backup the catalog every 
time after the daily bunch of Jobs and extract it when needed with other 
bacula tools (bextract).


2) I thought, you are in lack of disk space, so you can't afford to have 
the full backup twice plus many differential backups. So I do not see 
the difference if I have two full backups on a device for a day or for 
few hours, I need that space anyway. But I think this variant is better 
to be used it with your original idea: Every full backup volume has its 
own pool and the Job Schedule is set up to use volume 1 in odd weeks and 
do the immediate differential (practicaly zero sized) backup to the 
volume 2 just after the full one and vice-versa in even weeks. 
Priorities could help you as well in this case. May be some check if the 
full backup was good would be advisable, but I am not sure if bacula can 
do this kind of conditional job runs, may be with some python hacking or 
some After Run and Before Run scripts.
You can do the same for differential backups - two volumes in two pools, 
the first is used and the other cleared - in turns.
And finaly, you can combine it with previous solution and divide it to 
sevenths or more parts, but then it would be the real Catalog hell.


3) It is the worst solution. If you want to have bad sleep every Monday 
(or else day), try it. It is realy risky to loose the backup even for a 
while, an accident can strike at any time.


Marek

P.S. I could write it in czech, but the other readers can be interested 
too :-)


Radek Hladik wrote:

Hi,
	thanks for your answer. Your idea sounds good. However if I understand 
it correctly, there will be two full backups for the whole day after 
full backup. This is what I am trying to avoid as I will be backing up a 
lot of clients. So as I see it I have these possibilities:


1) use your scheme and divide clients into seven groups. One group will 
start it's full backup on Monday, second on Tuesday, etd.. So I will 
have all the week two full backups for 1/7 clients. This really seems 
like I will need to backup the catalog at least dozen times because no 
one will be able to deduct which backup is on which volume :-)
2) modify your scheme as there will be another differential backup right 
after the full backup before next job starts.

Re: [Bacula-users] Disk backup recycling

2007-10-24 Thread Radek Hladik
Hi,
Option 0 is the best one, however there are financial drawbacks :-) The
whole situation is like this: I have a Bacula server with two remote
SAN-connected drives. The SAN does mirroring etc., and the SAN drives are
considered stable and safe.
I have a backup rotation schema with 1 weekly full backup and 6
differential/incremental backups. I need to back up various routers,
servers, important workstations, etc. There are currently about 20
clients; the number should increase over time. The total storage to be
backed up is around 200-250 GB.
Until now we have used a simple ssh+tar solution. I would like to use
Bacula to "tidy up" the whole process and make it more reliable and
robust. The disk space on the SAN is expensive and "precious" and I would
like to use it reasonably. So I have no problem with connecting one
client after another, performing a full backup to a new volume and
deleting the old full backup afterwards - this is how it works now with
ssh+tar: a client connects, backs up to a temporary file, and when the
backup is complete the old backup is deleted and the temporary file is
renamed. Clients are backed up one after another, so I do not need that
much overhead disk space, and I have no problem justifying extra space
for one full backup.
I am still considering other options like spooling to a local SATA drive,
or backing up to a local drive and synchronizing to the SAN drives, but
every solution has some disadvantages...
A full catalog backup will be performed on the SAN drives every day and
to remote servers. I consider it the most valuable data to be backed up :-)

Radek


Marek Simon wrote:
> My opinion to your ideas:
> 0) Leave the schema as I submited and buy more disk space for backuping. :-)
> 
> 1) It is best variant I think. The other advantage is that the full 
> backup of all clients would take much longer time then 1/7th full and 
> other differential. Now what to do with Catalog:
> You can backup the catalog to some changable media (tape, CD/DWD-RW).
> You can pull the (zipped and may be encrypted) catalog to some or all of 
> your clients.
> You can send your (zipped and maybe encrypted) Catalog to some friend of 
> you (and you can backup his catalog for reciprocation), but it may be a 
> violence of the data privacy (even if the Catalog contain only names and 
> sizes).
> You can forward the bacula messages (completed backups) to some external 
> mail address and then if needed you can reconstruct the job-volume 
> binding from them.
> The complete catalog is too big for sending it by e-mail, but still you 
> can do SQL selection in catalog after the backup and send the job-volume 
> bindings and some other relevant information to the external email 
> address in CSV format.
> Still you can (and I strongly recommend to) backup the catalog every 
> time after the daily bunch of Jobs and extract it when needed with other 
> bacula tools (bextract).
> 
> 2) I thought, you are in lack of disk space, so you can't afford to have 
> the full backup twice plus many differential backups. So I do not see 
> the difference if I have two full backups on a device for a day or for 
> few hours, I need that space anyway. But I think this variant is better 
> to be used it with your original idea: Every full backup volume has its 
> own pool and the Job Schedule is set up to use volume 1 in odd weeks and 
> do the immediate differential (practicaly zero sized) backup to the 
> volume 2 just after the full one and vice-versa in even weeks. 
> Priorities could help you as well in this case. May be some check if the 
> full backup was good would be advisable, but I am not sure if bacula can 
> do this kind of conditional job runs, may be with some python hacking or 
> some After Run and Before Run scripts.
> You can do the same for differential backups - two volumes in two pools, 
> the first is used and the other cleared - in turns.
> And finaly, you can combine it with previous solution and divide it to 
> sevenths or more parts, but then it would be the real Catalog hell.
> 
> 3) It is the worst solution. If you want to have bad sleep every Monday 
> (or else day), try it. It is realy risky to loose the backup even for a 
> while, an accident can strike at any time.
> 
> Marek
> 
> P.S. I could write it in czech, but the other readers can be interested 
> too :-)
> 
> Radek Hladik wrote:
>> Hi,
>>  thanks for your answer. Your idea sounds good. However if I understand 
>> it correctly, there will be two full backups for the whole day after 
>> full backup. This is what I am trying to avoid as I will be backing up a 
>> lot of clients. So as I see it I have these possibilities:
>>
>> 1) use your scheme and divide clients into seven groups. One group will 
>> start it's full backup on Monday, second on Tuesday, etd.. So I will 
>> have all the week two full backups for 1/7 clients. This really seems 
>> like I will need to backup the catalog at least dozen times because no 
>> one will be able to deduct which backup is on which volume :-)

Re: [Bacula-users] Disk backup recycling

2007-10-24 Thread Marek Simon
My opinion to your ideas:
0) Leave the schema as I submitted it and buy more disk space for the backups. :-)

1) It is the best variant, I think. The other advantage is that a full
backup of all clients would take much longer than 1/7th full and the
rest differential. Now, what to do with the Catalog:
You can back up the catalog to some changeable media (tape, CD/DVD-RW).
You can pull the (zipped and maybe encrypted) catalog to some or all of
your clients.
You can send your (zipped and maybe encrypted) Catalog to a friend of
yours (and back up his catalog in return), but it may be a violation of
data privacy (even if the Catalog contains only names and sizes).
You can forward the bacula messages (completed backups) to some external
mail address and then, if needed, reconstruct the job-volume binding
from them.
The complete catalog is too big to send by e-mail, but you can still run
an SQL selection on the catalog after the backup and send the job-volume
bindings and other relevant information to the external email address in
CSV format.
You can also (and I strongly recommend it) back up the catalog after each
daily bunch of Jobs and extract it when needed with the other bacula
tools (bextract).

2) I thought you were short of disk space, so you can't afford to keep
the full backup twice plus many differential backups. So I do not see the
difference between having two full backups on a device for a day or for a
few hours; I need that space anyway. But I think this variant is better
used with your original idea: every full backup volume has its own pool,
and the Job Schedule is set up to use volume 1 in odd weeks and do an
immediate differential (practically zero-sized) backup to volume 2 just
after the full one, and vice versa in even weeks.
Priorities could help you here as well. Maybe some check that the full
backup was good would be advisable, but I am not sure whether bacula can
do this kind of conditional job run; maybe with some Python hacking or
some Run Before/Run After scripts.
You can do the same for differential backups - two volumes in two pools,
the first used and the other cleared - in turns.
And finally, you can combine it with the previous solution and divide it
into sevenths or more parts, but then it would be real Catalog hell.

3) It is the worst solution. If you want to sleep badly every Monday (or
whichever day), try it. It is really risky to lose the backup even for a
while; an accident can strike at any time.

Marek

P.S. I could have written this in Czech, but the other readers may be
interested too :-)

Radek Hladik wrote:
> Hi,
>   thanks for your answer. Your idea sounds good. However if I understand 
> it correctly, there will be two full backups for the whole day after 
> full backup. This is what I am trying to avoid as I will be backing up a 
> lot of clients. So as I see it I have these possibilities:
>
> 1) use your scheme and divide clients into seven groups. One group will 
> start it's full backup on Monday, second on Tuesday, etd.. So I will 
> have all the week two full backups for 1/7 clients. This really seems 
> like I will need to backup the catalog at least dozen times because no 
> one will be able to deduct which backup is on which volume :-)
> 2) modify your scheme as there will be another differential backup right 
> after the full backup before next job starts. It will effectively erase 
> the last week full backup.
> 3) use only 7 volumes and retention 6 days and live with the fact, that 
> there is no backup during backup.
>
> Now I only need to decide which option will be the best one :-)
>
> Radek
>
>   
>



Re: [Bacula-users] Disk backup recycling

2007-10-23 Thread Radek Hladik
Hi,
Thanks for your answer. Your idea sounds good. However, if I understand
it correctly, there will be two full backups for the whole day after a
full backup. This is what I am trying to avoid, as I will be backing up a
lot of clients. So as I see it, I have these possibilities:

1) Use your scheme and divide the clients into seven groups. One group
will start its full backup on Monday, the second on Tuesday, etc. So all
week I will have two full backups for 1/7 of the clients. It really seems
like I will need to back up the catalog at least a dozen times, because
otherwise no one will be able to deduce which backup is on which volume :-)
2) Modify your scheme so that another differential backup runs right
after the full backup, before the next job starts. It will effectively
erase the last week's full backup.
3) Use only 7 volumes and a retention of 6 days, and live with the fact
that there is no backup during the backup.

Now I only need to decide which option is the best one :-)

Radek


Marek Simon wrote:
> Hi,
> I suggest to solve like this:
> One Pool only. 8 volumes. Retention time 7 days and few hours. Use time 
> 1 day.
> 
> The use will be like this:
> first Monday: full backup. volume 1 used and contains the full backup
> Tue to Sun: diff backup using volumes 2 to 7 (automaticaly selected or 
> created by bacula)
> second Monday: volume 1 is not free yet, so using volume 8 for full 
> backup. Now you have two full backups.
> second Tuesday: volume 1 is available and bacula will recycle it for 
> first differencial backup. Old full backup is discarded. Now you have 
> full backup on volume 8, first diff on volume 1 and 6 volumes with 
> useless data
> second Wednesday: volume 2 is available etc.
> 
> You will not be able to keep the exact content of volumes in your head, 
> but the bacula is designed for not needing that. You can still read 
> every day's report and get your brain busy with it (and I recomand it 
> for few weeks to catch the buggies).
> 
> Marek
> 
> 
> Radek Hladik wrote:
>> Hi,
>>  I am implementing simple schema for backing up our data. Data are 
>> backed up to disk. Full backup is performed every Sunday. The next 6 
>> days differential backups are performed.
>> I want to keep only one Full backup and at maximum 6 daily differences. 
>> The moment new Full backup is made, previous Full backup and its 
>> "differential children" can be deleted. According to documentation 
>> (chapter Automated disk backup) I've made two pools like this:
>>
>> Pool
>> {
>>Name = full
>>Pool Type = Backup
>>Recycle = yes
>>AutoPrune = yes
>>Volume Retention = 7 days
>>Label format = full
>>Maximum Volume Jobs = 1
>>Maximum Volumes = 2
>> }
>>
>> Pool
>> {
>>Name = diff
>>Pool Type = Backup
>>Recycle = yes
>>AutoPrune = yes
>>Volume Retention = 7 days
>>Label format = diff
>>Maximum Volume Jobs = 1
>>Maximum Volumes = 8
>> }
>>
>> But as I understand it, there will be two full backups as backups have 
>> period 7 days and retention is 7 days too - so the last week backup will 
>> not be reused as it is not old enough. But if I lower retention to i.e. 
>> 6 days, the volume will be deleted before performing backup and there 
>> will be window without any backup.
>> I am backing up a lot of clients and I do not mind having one 
>> unnecessary backup during backup process itself but I would like to 
>> avoid having two backups for each client for whole the time.
>> And my other question - can be differential/incremental backups 
>> automatically deleted as soon as their "parent" is reused/deleted?
>>
>> Radek
>>
>>
>>



Re: [Bacula-users] Disk backup recycling

2007-10-23 Thread Marek Simon
Hi,
I suggest solving it like this:
One Pool only. 8 volumes. Retention time of 7 days and a few hours. Use
duration of 1 day.

The use will be like this:
first Monday: full backup. Volume 1 is used and contains the full backup.
Tue to Sun: diff backups using volumes 2 to 7 (automatically selected or
created by bacula).
second Monday: volume 1 is not free yet, so volume 8 is used for the full
backup. Now you have two full backups.
second Tuesday: volume 1 is available and bacula will recycle it for the
first differential backup. The old full backup is discarded. Now you have
the full backup on volume 8, the first diff on volume 1, and 6 volumes
with useless data.
second Wednesday: volume 2 is available, etc.
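
In Pool terms that is roughly (the label format is only an example):

Pool {
  Name = weekly
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 7 days 6 hours
  Volume Use Duration = 1 day
  Maximum Volumes = 8
  Label Format = weekly
}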

You will not be able to keep the exact contents of the volumes in your
head, but bacula is designed so that you don't need to. You can still
read every day's report and get your brain busy with it (and I recommend
doing so for a few weeks to catch the bugs).

Marek


Radek Hladik wrote:
> Hi,
>   I am implementing simple schema for backing up our data. Data are 
> backed up to disk. Full backup is performed every Sunday. The next 6 
> days differential backups are performed.
> I want to keep only one Full backup and at maximum 6 daily differences. 
> The moment new Full backup is made, previous Full backup and its 
> "differential children" can be deleted. According to documentation 
> (chapter Automated disk backup) I've made two pools like this:
>
> Pool
> {
>Name = full
>Pool Type = Backup
>Recycle = yes
>AutoPrune = yes
>Volume Retention = 7 days
>Label format = full
>Maximum Volume Jobs = 1
>Maximum Volumes = 2
> }
>
> Pool
> {
>Name = diff
>Pool Type = Backup
>Recycle = yes
>AutoPrune = yes
>Volume Retention = 7 days
>Label format = diff
>Maximum Volume Jobs = 1
>Maximum Volumes = 8
> }
>
> But as I understand it, there will be two full backups, because the backup
> period is 7 days and the retention is 7 days too - so last week's full will
> not be reused, as it is not old enough. But if I lower the retention to, say,
> 6 days, the volume will be deleted before the new backup runs and there
> will be a window without any backup.
> I am backing up a lot of clients and I do not mind having one
> unnecessary backup while the backup itself is running, but I would like to
> avoid keeping two full backups for each client the whole time.
> And my other question - can differential/incremental backups be
> automatically deleted as soon as their "parent" is reused/deleted?
>
> Radek
>

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Disk backup recycling

2007-10-21 Thread Radek Hladik
Hi,
I am implementing a simple scheme for backing up our data. Data are
backed up to disk. A full backup is performed every Sunday; on the next 6
days differential backups are performed.
I want to keep only one full backup and at most 6 daily differentials.
The moment a new full backup is made, the previous full backup and its
"differential children" can be deleted. According to the documentation
(chapter "Automated Disk Backup") I've made two pools like this:

Pool
{
   Name = full
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 7 days
   Label format = full
   Maximum Volume Jobs = 1
   Maximum Volumes = 2
}

Pool
{
   Name = diff
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 7 days
   Label format = diff
   Maximum Volume Jobs = 1
   Maximum Volumes = 8
}

But as I understand it, there will be two full backups, because the backup
period is 7 days and the retention is 7 days too - so last week's full will
not be reused, as it is not old enough. But if I lower the retention to, say,
6 days, the volume will be deleted before the new backup runs and there
will be a window without any backup.
I am backing up a lot of clients and I do not mind having one
unnecessary backup while the backup itself is running, but I would like to
avoid keeping two full backups for each client the whole time.
And my other question - can differential/incremental backups be
automatically deleted as soon as their "parent" is reused/deleted?

Radek



-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] disk backup "without volumes"?

2007-04-02 Thread Kristian Rink
Hi Brenden, list;

and first off thanks a lot for all of your hints and inspirations on
that issue. 

["Brenden Phillips" <[EMAIL PROTECTED]> @ Mon, 2 Apr 2007
12:00:30 +1200]

> We currently only use bacula to backup our windows servers and the
> backup servers themselves. For all the *NIX boxes we use dirvish
> (www.dirvish.org) which is a rsync based set of perl scripts that
> sounds like what you want. It will maintain complete images of the

Indeed, this reads like a good recommendation; I'll have a look at it
and see how it works with our environment. Thanks again!

Best regards,
Kristian

-- 
Dipl.-Ing.(BA) Kristian Rink * Software- und Systemingenieur
planConnect GmbH  * Strehlener Str. 12 - 14 * 01069 Dresden
fon: 0351 4657770 * mail: [EMAIL PROTECTED] * http://www.pm-planc.de

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] disk backup "without volumes"?

2007-04-01 Thread Brenden Phillips
Hi

We currently only use bacula to back up our Windows servers and the
backup servers themselves. For all the *NIX boxes we use dirvish
(www.dirvish.org), an rsync-based set of Perl scripts that sounds
like what you want. It maintains complete images of the target
servers' backup trees over time using rsync's hardlink feature.
Basically you do an initial complete rsync of the target, and
subsequent backups are done referencing the last good backup: if a file
hasn't changed, it adds a hardlink to the previous copy; if it has changed,
it does a differential rsync using the previous copy as a reference so that
only the changes are transferred; if it's new, the file is copied over; and
if it has been deleted, it simply isn't put in the new tree.

I would use bacula for everything but I just don't have the room - the
use of hardlinks saves a lot of space:

Backing up 8 Windows servers with bacula, with 10 daily incrementals, 5
weekly differentials and 3 monthly full volumes, uses 800GB.

With dirvish, I have the last 15 full backups, the last month's 5 Friday
full backups, and the first-Friday-of-the-month full backups for the last
3 months, for 73 servers, using 2TB.

YMMV of course

Cheers

Brenden


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Kristian Rink
Sent: Saturday, 31 March 2007 2:15 AM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] disk backup "without volumes"?


Folks;

maybe the subject sounds strange, nevertheless: in our environment we
currently do backups (a) from several servers to a machine with a large
disk array attached, using rsync, and (b) from there to tape using
afio+wrapper-scripts. So far this works well, and the reason for using
this approach is that people around here want to have the backed-up
repository available as a read-only SMB share just in case it is ever
needed (actually, it sometimes is).

After playing around with it for a while, I've grown to like bacula as a
distributed backup solution, as it drastically eases the maintenance
effort of rsyncing files from all over to one central place.
However, using bacula to store a backup to disk always leaves me with a
backup folder containing bacula "volume files", which are inaccessible to
an arbitrary user (who shouldn't have to mess with bconsole).

My question: Is there a way to set up the bacula-storage to dump files
to a disk into a file system structure that could be shared using SMB,
NFS, whatever? How can I achieve this effect, or is it not currently
supported / thought of? Any reading pointers on that?

Thanks in advance and bye,
Kristian

--
Dipl.-Ing.(BA) Kristian Rink * Software- und Systemingenieur planConnect
GmbH  * Strehlener Str. 12 - 14 * 01069 Dresden
fon: 0351 4657770 * mail: [EMAIL PROTECTED] * http://www.pm-planc.de



-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] disk backup "without volumes"?

2007-03-30 Thread Darien Hager
> My question: Is there a way to set up the bacula-storage to dump files
> to a disk into a file system structure that could be shared using SMB,
> NFS, whatever? How can I achieve this effect, or is it not currently
> supported / thought of? Any reading pointers on that?

I don't think it's possible to back up directly to a directory  
structure. That is, you can't "back up to folder hierarchy". (I mean,  
what happens to incrementals? Retention periods? etc.)

I think what you want to do can be done, but it involves restoring  
the backups after they are made into this SMB folder. (It may also be  
possible to do it with a cronjob and bacula's command-line tools like  
"bls" and "bextract".)

My guess would be that you can use the "Write bootstrap" directive  
for a Backup job to save the bootstrap (restoration settings) into a  
file. This will contain various things about what volume and byte  
offsets are needed, etc.

You can then have another scheduled Restore job which uses the  
"Bootstrap" directive too read the details, but whose "Client" and  
"Where" directives are set to your SMB server/folder. Then set the  
Schedule and Priority so that it will always run after the backup  
jobs occur. (Or run bextract and pass it the bootstrap, etc.)
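A rough sketch of that setup - every name, path and schedule below is just an
example picked for illustration, not anything from this thread:

# backup job that also writes a bootstrap file on every run
Job {
  Name = "srv1-backup"
  Type = Backup
  Client = srv1-fd
  FileSet = "Full Set"
  Schedule = "Nightly"
  Storage = File
  Pool = Default
  Messages = Standard
  Write Bootstrap = "/var/lib/bacula/srv1-backup.bsr"
}

# scheduled restore job that replays the bootstrap onto the SMB server
Job {
  Name = "srv1-to-smb"
  Type = Restore
  Client = smb-server-fd                      # FD running on the SMB server
  FileSet = "Full Set"
  Storage = File
  Pool = Default
  Messages = Standard
  Bootstrap = "/var/lib/bacula/srv1-backup.bsr"
  Where = "/exports/backup"                   # restored tree ends up under the share
  Schedule = "Nightly"
  Priority = 20                               # higher number = runs after the backups
}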

--
--Darien A. Hager
[EMAIL PROTECTED]
Mobile: (206) 734-5666



-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] disk backup "without volumes"?

2007-03-30 Thread Kristian Rink

Folks;

maybe the subject sounds strange, nevertheless: in our environment we
currently do backups (a) from several servers to a machine with a large
disk array attached, using rsync, and (b) from there to tape using
afio+wrapper-scripts. So far this works well, and the reason for using
this approach is that people around here want to have the backed-up
repository available as a read-only SMB share just in case it is ever
needed (actually, it sometimes is).

After playing around with it for a while, I've grown to like bacula as
a distributed backup solution, as it drastically eases the maintenance
effort of rsyncing files from all over to one central place.
However, using bacula to store a backup to disk always leaves me with a
backup folder containing bacula "volume files", which are inaccessible
to an arbitrary user (who shouldn't have to mess with bconsole).

My question: Is there a way to set up the bacula-storage to dump files
to a disk into a file system structure that could be shared using SMB,
NFS, whatever? How can I achieve this effect, or is it not currently
supported / thought of? Any reading pointers on that?

Thanks in advance and bye,
Kristian

-- 
Dipl.-Ing.(BA) Kristian Rink * Software- und Systemingenieur
planConnect GmbH  * Strehlener Str. 12 - 14 * 01069 Dresden
fon: 0351 4657770 * mail: [EMAIL PROTECTED] * http://www.pm-planc.de

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] disk backup and saturation

2006-02-21 Thread Steen . L . Meyer

My Pool section is here - it works and is inspired by the disk backup
example in the manual:


# Default pool definition
Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 365 days # one year
  Accept Any Volume = yes # write on any volume in the pool
  Recycle Oldest Volume = yes
  Maximum Volume Bytes = 2 gb
  Volume Use Duration = 10 days
  LabelFormat = "Def"
}

This one keeps it from growing:
  Maximum Volume Bytes = 2 gb

And this one makes Bacula recycle the oldest ones:
  Recycle Oldest Volume = yes
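One thing to keep in mind: the Pool resource only sets the defaults for
volumes created afterwards; volumes that already exist keep the limits
recorded in the catalog. Something like the following in bconsole should
bring an existing, oversized volume in line and let it be recycled (the
volume name is an example, and keyword spellings vary a bit between
versions - see "help update"):

*update volume=Def0001 MaxVolBytes=2147483648
*update volume=Def0001 volstatus=Used
*list volumes pool=Default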

regards

Steen




   
le dahut <[EMAIL PROTECTED]> wrote on 21/02/2006 09:51
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] disk backup and saturation


Hello,

I'm trying to set up backup to disk with Bacula. I'm experiencing
some problems: my volume grows and grows until it fills up the
whole destination disk, and then I get an error during backup. Here are
my configuration parameters:

Client {
   Name = serv1-fd
   Address = 10.121.11.4
   FDPort = 9102
   Catalog = MyCatalog
   Password = "XXX"
   File Retention = 40 days
   Job Retention = 6 months
   AutoPrune = yes
}

Pool {
   Name = serv1-pool
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 40 days
   Accept Any Volume = yes
   LabelFormat = serv-pedago
}

I'm using bacula-sqlite-1.36.3.

What should I do so that the old jobs really get erased from the volume
and no longer take up space on my backup disk?

K.









---
This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
for problems?  Stop!  Download the new AJAX search engine that makes
searching your log files as easy as surfing the  web.  DOWNLOAD SPLUNK!
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=103432&bid=230486&dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] disk backup and saturation

2006-02-21 Thread le dahut

Hello,

I'm trying to set up backup to disk with Bacula. I'm experiencing
some problems: my volume grows and grows until it fills up the
whole destination disk, and then I get an error during backup. Here are
my configuration parameters:


Client {
  Name = serv1-fd
  Address = 10.121.11.4
  FDPort = 9102
  Catalog = MyCatalog
  Password = "XXX"
  File Retention = 40 days
  Job Retention = 6 months
  AutoPrune = yes
}

Pool {
  Name = serv1-pool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 40 days
  Accept Any Volume = yes
  LabelFormat = serv-pedago
}

I'm using bacula-sqlite-1.36.3.

What should I do so that the old jobs really get erased from the volume
and no longer take up space on my backup disk?


K.



---
This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
for problems?  Stop!  Download the new AJAX search engine that makes
searching your log files as easy as surfing the  web.  DOWNLOAD SPLUNK!
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=103432&bid=230486&dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Disk Backup Strategy Advice

2006-01-31 Thread Arno Lehmann

Hello,

On 1/31/2006 6:42 PM, Brad Pinkston wrote:



I have 1.2TB worth of space to back up 25 servers.  I’d like to do three 
weeks of backups.


Well, at least you have to give some more information: How much data 
does a full backup consist of today, how many generations of backups do 
you need, how many do you want, do you want to store volumes off-site, 
can volume management be done daily, weekly, or never at all... and, 
very important, how much of your data changes daily, and how fast does 
the total data volume grow?


 The director and storage are currently running on the 
same machine.  I’d like to set this up to do concurrent backups.


Depending on the machine and, quite importantly, on whether the catalog
database is on that machine too... usually not a problem.
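The directive that governs this is Maximum Concurrent Jobs, and it has to be
raised in every resource along the chain - the effective limit is the smallest
value. A sketch, with arbitrary example values:

  Maximum Concurrent Jobs = 10   # in the Director {} resource of bacula-dir.conf
  Maximum Concurrent Jobs = 10   # in the Storage {} resource the jobs write to (bacula-dir.conf)
  Maximum Concurrent Jobs = 10   # in the Storage {} resource of bacula-sd.conf

Note that jobs writing to the same volume at the same time get their blocks
interleaved, which makes restores somewhat slower.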


 I 
apologize for leaving this so vague, but want to start as much 
conversation as possible so I can consider everyone’s input.  I’m very 
open to changing everything that is setup.


Before we can suggest or even discuss changes we'll probably need a 
little more information about your setup.


For example: I use a very old server with a DLT autoloader, I have tapes 
holding about 1.5 TB, I back up 6 machines, I keep backups for a year, 
and I run the catalog database on a separate server. I can run up to 
five or six jobs simultaneously. Not very interesting - you'd need to 
know much more about my backup schedules and the amount of data I 
collect from the different machines.


Arno

 


Thanks in advance

 


Brad Pinkston
Sr. Linux Systems Administrator
E: [EMAIL PROTECTED]
O: 214.206.3485
M: 469.682.6487
F: 303.496.2712
http://support.newmediagateway.com 

 



--
IT-Service Lehmann[EMAIL PROTECTED]
Arno Lehmann  http://www.its-lehmann.de



---
This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
for problems?  Stop!  Download the new AJAX search engine that makes
searching your log files as easy as surfing the  web.  DOWNLOAD SPLUNK!
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=103432&bid=230486&dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Disk Backup Strategy Advice

2006-01-31 Thread Brad Pinkston








I have 1.2TB worth of space to back up 25 servers.  I’d
like to do three weeks of backups.  The director and storage are currently
running on the same machine.  I’d like to set this up to do concurrent
backups.  I apologize for leaving this so vague, but want to start as much
conversation as possible so I can consider everyone’s input.  I’m
very open to changing everything that is set up.

 

Thanks in advance

 

Brad Pinkston
Sr. Linux Systems Administrator
E: [EMAIL PROTECTED]
O: 214.206.3485
M: 469.682.6487
F: 303.496.2712
http://support.newmediagateway.com