Re: [Bacula-users] Tape Library and Cleaning Requests?

2013-01-17 Thread Jummo
Hi Frank,

On Wed, 16 Jan 2013, f.staed...@dafuer.de wrote:
 just another question I stumbled upon. While doing more testing with a
 tape library and my rather large backup sets I came across a new
 question. A single full backup of one machine is about 30 TB, and while a
 full backup is running the library requests drive cleaning (I called
 support and they told me the tape drive will request cleaning at regular
 intervals based on the TBs written to the tape, along with start/stop cycles).
 

Maybe your tape library has an option to load the cleaning tape automatically.
My tape library has such a feature, and I configured Bacula not to load tapes
with the CLN prefix, as Adrian already mentioned.

My experience so far: if a cleaning request occurs during a running job, the
tape library unloads the current tape, loads the cleaning tape, and after
cleaning reloads the previous tape. The Bacula job is put on hold and waits
for the Storage Device to become ready again.
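
For reference, the behaviour Patrick describes is controlled by a directive in
the Pool resource; a minimal sketch (the pool name is a placeholder, and "CLN"
is in fact Bacula's default prefix):

```conf
Pool {
  Name = WeeklyTapes          # placeholder pool name
  Pool Type = Backup
  # Volumes whose labels start with this prefix are treated as
  # cleaning tapes and are never selected for writing.
  Cleaning Prefix = "CLN"
}
```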

Best Regards,
Patrick

--
Master Visual Studio, SharePoint, SQL, ASP.NET, C# 2012, HTML5, CSS,
MVC, Windows 8 Apps, JavaScript and much more. Keep your skills current
with LearnDevNow - 3,200 step-by-step video tutorials by Microsoft
MVPs and experts. ON SALE this month only -- learn more at:
http://p.sf.net/sfu/learnmore_122712
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] save DB's online

2013-01-17 Thread Sven Gehr
Hi all,

is it possible to back up databases, e.g. MySQL or PostgreSQL (on other hosts),
online with Bacula?

-- 
Best regards

Sven Gehr




[Bacula-users] parallelizing ClientRunBeforeJobs

2013-01-17 Thread Tilman Schmidt
One of my Bacula servers backs up 18 clients. Many of these
have a lengthy ClientRunBeforeJob. (SVN hot backups, database
dumps, Windows system state backups - you get the picture.)
As a consequence, more than half of the elapsed time of my
nightly backup runs is actually spent waiting for one of the
clients to complete its Before job while everything else sits
idle.

How are others handling this situation? Any smart ideas how
to get the before jobs to run in parallel, preferably keeping
the actual writing to tape sequential?

aTdHvAaNnKcSe
Tilman

-- 
Tilman Schmidt
Phoenix Software GmbH
Bonn, Germany






Re: [Bacula-users] save DB's online

2013-01-17 Thread Uwe Schuerkamp
On Thu, Jan 17, 2013 at 10:40:26AM +0100, Sven Gehr wrote:
 Hi all,
 
 is it possible to back up databases, e.g. MySQL or PostgreSQL (on other hosts),
 online with Bacula?
 
 -- 
 Best regards
 
 Sven Gehr
 


Yes and no. If there are no jobs running you can set the DB to read-only,
but Bacula will barf the next time it tries to insert something into the
tables.

A frequently quoted method is creating an LVM snapshot and using a
tool like mydumper to create the backup. I don't know about Postgres,
as we're using MariaDB exclusively with Bacula ATM.

Cheers, Uwe 

-- 
NIONEX --- A company of Bertelsmann SE & Co. KGaA





Re: [Bacula-users] parallelizing ClientRunBeforeJobs

2013-01-17 Thread Radosław Korzeniewski
Hello,

2013/1/17 Tilman Schmidt t.schm...@phoenixsoftware.de

 One of my Bacula servers backs up 18 clients. Many of these
 have a lengthy ClientRunBeforeJob. (SVN hot backups, database
 dumps, Windows system state backups - you get the picture.)
 As a consequence, more than half of the elapsed time of my
 nightly backup runs is actually spent waiting for one of the
 clients to complete its Before job while everything else sits
 idle.

 How are others handling this situation? Any smart ideas how
 to get the before jobs to run in parallel, preferably keeping
 the actual writing to tape sequential?


Run all jobs in parallel and use data spooling. This may require that all
jobs use the same pool if you have only one tape drive.
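
In configuration terms, data spooling is switched on per job; a minimal
sketch (the job name is a placeholder):

```conf
Job {
  Name = "nightly-client01"   # placeholder job name
  # Spool this job's data to disk first, then despool it to tape
  # as one sequential stream.
  Spool Data = yes
}
```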

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net


Re: [Bacula-users] parallelizing ClientRunBeforeJobs

2013-01-17 Thread lst_hoe02

Quoting Tilman Schmidt t.schm...@phoenixsoftware.de:

 One of my Bacula servers backs up 18 clients. Many of these
 have a lengthy ClientRunBeforeJob. (SVN hot backups, database
 dumps, Windows system state backups - you get the picture.)
 As a consequence, more than half of the elapsed time of my
 nightly backup runs is actually spent waiting for one of the
 clients to complete its Before job while everything else sits
 idle.

 How are others handling this situation? Any smart ideas how
 to get the before jobs to run in parallel, preferably keeping
 the actual writing to tape sequential?

Hello

we use the spooling feature and run all *jobs* concurrently. With this
you get adjustable-sized blocks written to tape sequentially to
maximize throughput, plus concurrent spooling for all other jobs. Works
trouble-free, but you need a really fast spool device for anything
faster than LTO-3.
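
The spool area itself is configured on the storage daemon's Device resource;
a sketch, with the directory and size as assumptions to be tuned to your
spool disk:

```conf
Device {
  Name = LTO4-Drive                   # placeholder device name
  # Put the spool on the fastest local disk available; despool
  # speed limits tape throughput.
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 200G
}
```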

Regards

Andreas





Re: [Bacula-users] save DB's online

2013-01-17 Thread lst_hoe02

Quoting Sven Gehr mailingli...@dreampixel.de:

 Hi all,

 is it possible to back up databases, e.g. MySQL or PostgreSQL (on other hosts),
 online with Bacula?

There are many different possibilities:

- Use the dump utility of the DB to get a consistent dump to back up,
maybe via a ClientRunBeforeJob

- Use the DB's built-in facilities to put it into backup mode
and let Bacula simply save the files

- Use the Enterprise plugins
(http://www.baculasystems.com/products/bacula-enterprise-plugins/postgresql-plugin),
which provide one or both of the above in an easy way
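
The first option might look like this in the Job resource (the script path is
an assumption; the script would run mysqldump or pg_dump and write into a
directory included in the FileSet):

```conf
Job {
  Name = "db-host-backup"     # placeholder job name
  # Runs on the client before the FileSet is read, so the FileSet
  # only needs to include the dump directory.
  ClientRunBeforeJob = "/usr/local/sbin/dump-databases.sh"
}
```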

Regards

Andreas





Re: [Bacula-users] Data encryption fails if backed up from fifo

2013-01-17 Thread lst_hoe02

Quoting Axel Rau axel@chaos1.de:

 Am 15.01.2013 um 23:05 schrieb Axel Rau:

 Files backed up with
  readfifo = yes
 seem to be backed up fine:
 ---
  Software Compression:   None
  VSS:no
  Encryption: yes
  Accurate:   no
  Volume name(s): DB1-DB-DAILY-0120
  Volume Session Id:  15
  Volume Session Time:1358198139
  Last Volume Bytes:  15,341,071,607 (15.34 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK
 ---
 Trying to restore them fails with following err:
 ---
 15-Jan 22:21 db1-fd JobId 2492: Error: restore.c:772 \
 Missing encryption session data stream for  
 /usr/local/pgsql/bacula_backup/fifo/development.data.dump
 ---
 No problems to backup and restore flat files with data encryption.
 This is bacula 5.2.12 (12Sep12). Tested with clients on FreeBSD 8.2 and 9.1.

 If nobody has any comments on this, I will file a bug report.
 Axel

Yup, that would be best. We use data encryption but no FIFO, so no comment
from our side.

Regards

Andreas





Re: [Bacula-users] parallelizing ClientRunBeforeJobs

2013-01-17 Thread Dan Langille
On 2013-01-17 05:34, lst_ho...@kwsoft.de wrote:
 Quoting Tilman Schmidt t.schm...@phoenixsoftware.de:

 One of my Bacula servers backs up 18 clients. Many of these
 have a lengthy ClientRunBeforeJob. (SVN hot backups, database
 dumps, Windows system state backups - you get the picture.)
 As a consequence, more than half of the elapsed time of my
 nightly backup runs is actually spent waiting for one of the
 clients to complete its Before job while everything else sits
 idle.

 How are others handling this situation? Any smart ideas how
 to get the before jobs to run in parallel, preferably keeping
 the actual writing to tape sequential?

 Hello

 we use the spooling feature and run all *jobs* concurrently. With 
 this
 you get adjustable sized blocks written to tape sequential to
 maximize throughput and concurrent spooling for all other jobs. Works
 trouble free but you need a really fast spool device for anything
 faster than LTO-3.

I'm not sure how useful this is, but perhaps admin jobs would be helpful
here. They aren't run-before jobs, and they don't back anything up; they
just run.

-- 
Dan Langille - http://langille.org/



Re: [Bacula-users] Reporting on previous jobs

2013-01-17 Thread Jérôme Blion
On 2013-01-16 20:33, Jack Cobb wrote:
 We are using Bacula 5.0.3 on an Ubuntu 10.04 server using MySQL as
 the database engine. We review each day's backup to verify it
 completed but now our auditors are asking for a report that shows the
 backup job results for the previous twelve months...which is the
 amount of time we keep our backup history.

 Has anyone ever generated a report using the Bacula history and if so
 what tools did you use? Thanks.

 Jack Cobb

 MIS Department

 Skyline Corporation

 574.294.6521 x.362

 jc...@skylinecorp.com


Hello,

I think you should have a look at Reportula, Webacula, and Bacula-Web.
They can help you get the pieces of data you will have to provide to the
auditors.
Good luck (I had to do it for SOX compliance).

HTH,
Jérôme Blion.



Re: [Bacula-users] save DB's online

2013-01-17 Thread Phil Stracchino
On 01/17/13 04:55, Uwe Schuerkamp wrote:
 On Thu, Jan 17, 2013 at 10:40:26AM +0100, Sven Gehr wrote:
 Hi all,

 is it possible to back up databases, e.g. MySQL or PostgreSQL (on other hosts),
 online with Bacula?
 
 
 Yes and no. If there are no jobs running you can set the db to read
 only, but bacula will barf the next time it tries to insert something
 into the tables. 
 
 A frequently quoted method is creating an lvm snapshot and using a
 tool like mydumper to create the backup. I don't know about postgres
 as we're using mariadb exclusively with bacula ATM. 

This seems like a bit of a confused mixture.

First, yes, you totally can back up MySQL DBs online, provided you do it
correctly.  Correctly means, in general, one of two things:  a
transactional backup or a snapshot.

A transactional backup can be done with any of several tools --
mysqldump, mydumper, Percona XtraBackup, MySQL Enterprise Backup -- *as
long as you are using InnoDB tables*.  (And at this point in time,
unless you're using one of the small handful of MyISAM table features
not yet supported by InnoDB, you have no excuse for NOT using all
InnoDB.)  If you're using mysqldump, which is old and at this point
pretty much the village idiot of MySQL backup tools, you'll need to use
--single-transaction --skip-lock-tables when running it.  The other
tools mentioned will automatically just Do The Right Thing.
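
A minimal sketch of such an invocation (the output path is an assumption;
credentials are omitted, and InnoDB tables are required as noted above):

```sh
# Consistent online dump without blocking writers (InnoDB only).
mysqldump --single-transaction --skip-lock-tables \
  --all-databases > /var/backups/mysql/all-databases.sql
```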

For a snapshot backup, you can issue a FLUSH TABLES WITH READ LOCK to
quiesce all of the MyISAM tables, wait for it to return, snapshot the
data directory, release the lock, and then mount the snapshot and back
it up.  We have found at my company that LVM snapshots actually do not
work very well for this purpose, because they are too slow and require
too much disk space.  On the other hand, ZFS snapshots work extremely
well, as they are virtually instant and require no reserved disk space.
If you have to restore, it will be fast compared to reloading a dump,
but you will have to do an InnoDB recovery, so make sure you back up
both binary logs (if any) and InnoDB write-ahead logs.
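
A sketch of that sequence; the lock only holds while the client session stays
open, so the snapshot is taken from inside the same session. The ZFS dataset
name tank/mysql and the snapshot label are assumptions:

```sh
mysql -u root <<'EOF'
FLUSH TABLES WITH READ LOCK;
\! zfs snapshot tank/mysql@bacula-backup
UNLOCK TABLES;
EOF
```

The snapshot can then be mounted (or cloned) and handed to Bacula as an
ordinary directory.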

Either way, you do not back up the live data.  Trying to do that is a
waste of time, because your backup will be inconsistent, because the
database will be changing as you back it up.  There's no point in
backing up the live data files.  Don't bother to do it.  It's a waste of
time and space.  Perform a consistent transactional dump and back up the
dump, or perform a snapshot and back up the snapshot.

PostgreSQL has a tool called pg_dumpall that is conceptually similar to
mysqldump and mydumper.
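
A minimal sketch for the PostgreSQL side (user and output path are
assumptions):

```sh
# Dumps every database plus roles and tablespaces in one consistent pass.
pg_dumpall -U postgres > /var/backups/pgsql/all-databases.sql
```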


One last footnote:  *SOLELY* setting MySQL read-only does NOT guarantee
a consistent backup.  You must FLUSH TABLES, and even then you're still
not 100% safe on InnoDB.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] save DB's online

2013-01-17 Thread Uwe Schuerkamp
On Thu, Jan 17, 2013 at 08:20:48AM -0500, Phil Stracchino wrote:
 One last footnote:  *SOLELY* setting MySQL read-only does NOT guarantee
 a consistent backup.  You must FLUSH TABLES, and even then you're still
 not 100% safe on InnoDB.
 

Agreed, I forgot that important step in my previous email. Our method
right now involves flushing the tables, setting the read lock,
rsyncing the data files to a separate directory, then re-enabling write
access to the DB while tarring/lzop'ing the new directory in the
background.

Naturally we only do this when there are no backup jobs running, which
is easy to determine using a "stat dir" echoed to bconsole before the
database backup starts.
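
That idle check could be sketched like this (the exact "No Jobs running" text
is an assumption about this Bacula version's status output, and the backup
script name is hypothetical):

```sh
if echo "status dir" | bconsole -c /etc/bacula/bconsole.conf \
     | grep -q "No Jobs running"; then
    /usr/local/sbin/mysql-snapshot-backup.sh   # hypothetical backup script
else
    echo "Backup jobs active, skipping DB backup" >&2
fi
```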

Could we expect to see better DB performance by moving to InnoDB or
one of MariaDB's fancy new backends? I'm especially interested in
improving volume recycle times, which can be quite long with our setup
(200 GB File table). The DELETE FROM File WHERE JobId IN (...) can
sometimes take a few hours.

Cheers, Uwe 



-- 
NIONEX --- A company of Bertelsmann SE & Co. KGaA





[Bacula-users] bacula tape rotation

2013-01-17 Thread dubnik
Hi,

I have a question about my backup plan.

I have a 500 GB web partition I want to back up, an IBM TS3200 library, and
two 800 GB tapes.

My plan is to make a full backup on Sunday at 2 a.m. and then an incremental
every day at 2 a.m., all on the first tape. The next Sunday I will make a
full backup again on the second tape, then daily incrementals on that tape.
And this will repeat.

Can you please tell me how to rotate these tapes (prune, label, etc.)?
I have no idea how to do this.
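
One way to express that plan (a sketch; resource names and the retention
period are assumptions chosen to fit a two-tape weekly cycle):

```conf
Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full sun at 2:00
  Run = Level=Incremental mon-sat at 2:00
}

Pool {
  Name = WeekTapes
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  # With two tapes alternating weekly, a tape must become prunable
  # before it is needed again two Sundays later.
  Volume Retention = 13 days
}
```

Label both tapes once via bconsole; with AutoPrune and Recycle set, Bacula
prunes and reuses the older tape automatically when it is needed again.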

Thanx

+--
|This was sent by dub...@atlas.sk via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] Reporting on previous jobs

2013-01-17 Thread Jack Cobb
Bill,

Thanks for the information.  Using your ideas and with help from one of our
MySQL programmers we were able to create a script that will work for us.

Jack



On 01/16/13 14:33, Jack Cobb wrote:
 We are using Bacula 5.0.3 on an Ubuntu 10.04 server using MySQL as the 
 database engine.  We review each day's backup to verify it completed 
 but now our auditors are asking for a report that shows the backup job 
 results for the previous twelve months...which is the amount of time we
keep our backup history.
 
  
 
 Has anyone ever generated a report using the Bacula history and if so 
 what tools did you use?  Thanks.

Hi Jack... couldn't you just use the "list jobs" bconsole output?

It is formatted reasonably enough as-is, or you could script something to
convert it into CSV format which could be pulled into a spreadsheet.

A quick (and very dirty) combination of bconsole, grep and sed might do the
trick like so:

echo "list jobs" | bconsole -c /etc/bacula/bconsole.conf \
 | grep '^|' | sed -e 's/|//g' -e 's/,//g' -e 's/^ \+//' \
 -e 's/ \+/ /g' -e 's/ \+$//' -e 's/ /,/g'


Which outputs:

--[snip]--
18966,SpeedyFull,2013-01-15,20:30:21,B,I,23614,1816360338,T
18967,SpeedyMusic,2013-01-15,20:30:21,B,I,0,0,T
18969,Voip,2013-01-15,20:30:21,B,I,152,1174965049,T
18980,Voip,2013-01-15,20:30:21,C,I,152,1174982569,T
18970,Satch,2013-01-15,20:30:23,B,I,84,1311771664,T
18965,NewbyFull,2013-01-15,20:30:26,B,I,1513,812848531,T
18971,Zimbra,2013-01-16,02:45:00,B,F,387775,16323518664,T
18972,Helpdesk,2013-01-16,02:46:58,B,F,182057,9736122612,T
18973,Newby_MustHave,2013-01-16,04:00:05,B,F,22914,6787198696,T
18974,Catalog,2013-01-16,04:04:36,B,F,9,11054415984,T
--[snip]--


Of course, all of our Job Names contain no spaces so the above output does
not insert double quotes, but that is easy enough to fix if you have spaces
in your job names.
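
As a self-contained illustration, here is the same grep/sed pipeline applied
to one sample row (the row below is illustrative, in the layout bconsole
prints, not a real job record):

```shell
# One sample row in the layout of bconsole's "list jobs" table
# (illustrative values, not a real job record).
sample='| 18966 | SpeedyFull | 2013-01-15 20:30:21 | B | I | 23614 | 1816360338 | T |'

# Strip pipes and commas, trim and collapse spaces, then turn
# the remaining single spaces into CSV separators.
csv=$(printf '%s\n' "$sample" \
  | grep '^|' \
  | sed -e 's/|//g' -e 's/,//g' -e 's/^ \+//' \
        -e 's/ \+/ /g' -e 's/ \+$//' -e 's/ /,/g')

echo "$csv"   # 18966,SpeedyFull,2013-01-15,20:30:21,B,I,23614,1816360338,T
```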

Hope this helps

--
Bill Arlofski
Reverse Polarity, LLC






Re: [Bacula-users] save DB's online

2013-01-17 Thread Phil Stracchino
On 01/17/13 08:33, Uwe Schuerkamp wrote:
 Could we expect to see better db performance by moving to innodb or
 one of MariaDB's fancy new backends? I'm especially interested in
 improving volume recycle times which can be quite long with our setup
 (200GB file table). the DELETE from File where JobId in (.) can
 take a few hours sometimes.  

InnoDB in general performs considerably better than MyISAM, especially
on current MySQL branches.  Percona claims that their XtraDB engine has
a slight performance edge over InnoDB.  I don't have any direct
experience with MariaDB yet.

MyISAM can perform reasonably in an almost-all-read situation, but on
any modern-scale DB, as soon as you start throwing any significant
percentage of writes into the query mix MyISAM performance completely
tanks.  I've seen all-MyISAM customer DBs at a complete standstill from
write-contention bottlenecks.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] save DB's online

2013-01-17 Thread Uwe Schuerkamp
On Thu, Jan 17, 2013 at 08:52:32AM -0500, Phil Stracchino wrote:
 On 01/17/13 08:33, Uwe Schuerkamp wrote:
  Could we expect to see better db performance by moving to innodb or
  one of MariaDB's fancy new backends? I'm especially interested in
  improving volume recycle times which can be quite long with our setup
  (200GB file table). the DELETE from File where JobId in (.) can
  take a few hours sometimes.  
 
 InnoDB in general performs considerably better than MyISAM, especially
 on current MySQL branches.  Percona claims that their XtraDB engine has
 a slight performance edge over InnoDB.  I don't have any direct
 experience with MariaDB yet.
 
 MyISAM can perform reasonably in an almost-all-read situation, but on
 any modern-scale DB, as soon as you start throwing any significant
 percentage of writes into the query mix MyISAM performance completely
 tanks.  I've seen all-MyISAM customer DBs at a complete standstill from
 write-contention bottlenecks.
 


Well, I guess I'll have to stop backups for a week then and migrate
the DB over to InnoDB... or find some clever way involving a MyISAM master
/ InnoDB slave setup. I'll do some reading on the web, thanks!
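
For reference, the conversion itself is a one-time ALTER TABLE per table,
sketched below for a few of the catalog tables; it rewrites each table
completely, so on a 200 GB File table expect it to run for a long time:

```sql
ALTER TABLE File  ENGINE = InnoDB;
ALTER TABLE Job   ENGINE = InnoDB;
ALTER TABLE Media ENGINE = InnoDB;
```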

Uwe 

-- 
NIONEX --- A company of Bertelsmann SE & Co. KGaA





Re: [Bacula-users] Client side FS detection

2013-01-17 Thread brconflict
So I managed a bare-metal restore, but it still requires either
static-bacula-fd or bacula-fd with library tools. Compiling bacula-fd
with --disable-libtool gives me a "No such file or directory" error, which
indicates that there is a missing library.

So, I know now that the issue I'm having is simply with GZIP. Bacula gives
us the ability to back up directories (even "/") with GZIP, but if I try
a bare-metal restore, the error is that GZIP is not supported on this
client.

Is there a way to compile static-bacula-fd with GZIP ability?

Thanks!!

On Wed, Jan 16, 2013 at 4:50 AM, Florian Heigl florian.he...@gmail.com wrote:

 Hi all,

 I'm aware this must be sort of an FAQ. I just failed to find good
 examples in my searches.
 What I'm looking for is how others solved listing the filesystems on a
 client using a client-side command that is configured on the dir.

 My goal is to back up everything local on a system, but the standard
 example of looking for, e.g., mounts from hda and hdb is not good.
 For the sake of an example, let's say the individual servers could be
 running FreeBSD or Linux and could be using udev to rename their
 disks, or could be attaching more using iSCSI.
 One idea I can think of is using a list of filesystem types that matter.
 That way you can handle most things and also exclude cluster
 filesystems like OCFS2 that are best backed up with a different
 job and separate FD.

 On the other hand this idea might break if someone uses an esoteric
 zbcdfs which I'm not expecting in my list of good filesystems.

 How have you gone about solving this?


 Florian

 --
 the purpose of libvirt is to provide an abstraction layer hiding all
 xen features added since 2006 until they were finally understood and
 copied by the kvm devs.





[Bacula-users] Catastrophic overflow block problems

2013-01-17 Thread Ruth Ivimey-Cook

Hi,

I am sometimes getting these errors in my bacula backups:

Fatal error: device.c:192 Catastrophic error. Cannot write overflow block to device 
DiskStorage-drive-0

and it is more likely on the larger volume backups. It seemingly results
from Bacula trying to write an additional block to a disk drive that is
already 100% full. How can I stop Bacula from believing this is a valid
thing to do?


Background: I have Bacula set up on my local network to back up a file
server and a number of workstations. The file server is also the Bacula
director and is running Fedora 15 and
bacula-common-5.0.3-28.fc15.x86_64. Bacula is writing backups to an
iSCSI disk group (not array) over Ethernet; there are 6 disks of 1 TB to
2 TB size, and these are managed using vchanger 0.8.6, with 6 magazines
each with 24 virtual volumes. The file server has 3.5 TB of files, and
the other workstations add about another 1 TB.


More-complete log:

   17-Jan 14:49 helva-sd JobId 3417: Recycled volume DiskPool1_0006_0017 on device 
DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0), all previous data lost.
   17-Jan 14:49 helva-sd JobId 3417: New volume DiskPool1_0006_0017 mounted on device 
DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0) at 17-Jan-2013 14:49.
   17-Jan 14:49 helva-sd JobId 3417: End of Volume DiskPool1_0006_0017 at 0:216 on 
device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). Write of 64512 bytes 
got 3879.
   17-Jan 14:49 helva-sd JobId 3417: End of medium on Volume 
DiskPool1_0006_0017 Bytes=217 Blocks=0 at 17-Jan-2013 14:49.
   17-Jan 14:49 helva-sd JobId 3417: 3307 Issuing autochanger unload slot 89, drive 
0 command.
   17-Jan 14:49 helva-dir JobId 3417: Using Volume DiskPool1_0006_0018 from 
'Scratch' pool.
   17-Jan 14:49 helva-sd JobId 3417: 3301 Issuing autochanger loaded? drive 0 
command.
   17-Jan 14:49 helva-sd JobId 3417: 3302 Autochanger loaded? drive 0, 
result: nothing loaded.
   17-Jan 14:49 helva-sd JobId 3417: 3304 Issuing autochanger load slot 90, drive 
0 command.
   17-Jan 14:49 helva-sd JobId 3417: 3305 Autochanger load slot 90, drive 0, 
status is OK.
   17-Jan 14:49 helva-sd JobId 3417: Recycled volume DiskPool1_0006_0018 on device 
DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0), all previous data lost.
   17-Jan 14:49 helva-sd JobId 3417: New volume DiskPool1_0006_0018 mounted on device 
DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0) at 17-Jan-2013 14:49.
   17-Jan 14:49 helva-sd JobId 3417: End of Volume DiskPool1_0006_0018 at 0:216 on 
device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). Write of 64512 bytes 
got 3879.
   17-Jan 14:49 helva-sd JobId 3417: Fatal error: device.c:192 Catastrophic error. Cannot write overflow block to device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). ERR=No space left on device
   17-Jan 14:49 helva-fd JobId 3417: Error: bsock.c:393 Write error sending 65562 bytes to Storage daemon:helva.cam.ivimey.org:9103: ERR=Connection reset by peer
   17-Jan 14:49 helva-fd JobId 3417: Fatal error: backup.c:1024 Network send error to SD. ERR=Connection reset by peer
   17-Jan 14:49 helva-sd JobId 3417: Fatal error: device.c:192 Catastrophic error. Cannot write overflow block to device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). ERR=No space left on device
   17-Jan 14:49 helva-sd JobId 3417: Fatal error: device.c:192 Catastrophic error. Cannot write overflow block to device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). ERR=No space left on device
   17-Jan 14:49 helva-sd JobId 3417: Fatal error: device.c:192 Catastrophic error. Cannot write overflow block to device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). ERR=No space left on device
   17-Jan 14:49 helva-sd JobId 3417: Fatal error: device.c:192 Catastrophic error. Cannot write overflow block to device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). ERR=No space left on device
   17-Jan 14:49 helva-sd JobId 3417: Job write elapsed time = 14:30:55, Transfer rate = 12.06 M Bytes/second
   17-Jan 14:49 helva-dir JobId 3417: Error: Bacula helva-dir 5.0.3 (04Aug10): 
17-Jan-2013 14:49:32
  Build OS:   x86_64-redhat-linux-gnu redhat
  JobId:  3417
  Job:Helva_Home.2013-01-17_00.17.26_23
  Backup Level:   Full
  Client: helva-fd 5.0.3 (04Aug10) 
x86_64-redhat-linux-gnu,redhat,
  FileSet:Home 2010-12-07 13:37:32
  Pool:   Normal-Full-18w (From Job FullPool override)
  Catalog:MyCatalog (From Client resource)
  Storage:DiskStorage (From command line)
  Scheduled time: 17-Jan-2013 00:17:26
  Start time: 17-Jan-2013 00:17:28
  End time:   17-Jan-2013 14:49:32
  Elapsed time:   14 hours 32 mins 4 secs
  Priority:   12
  FD Files Written:   162,621
  SD 

Re: [Bacula-users] Client side FS detection

2013-01-17 Thread Novosielski, Ryan
Please don't thread hijack. I imagine this is an accident, but it makes things 
very confusing when the content matches some other subject line.



From: brconflict [mailto:brconfl...@gmail.com]
Sent: Thursday, January 17, 2013 09:51 AM
To: Florian Heigl florian.he...@gmail.com
Cc: bacula-users@lists.sourceforge.net bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Client side FS detection

So I managed a bare-metal restore, but it still requires either a static-bacula-fd 
or a bacula-fd with its library tools. Compiling bacula-fd with --disable-libtool 
gives me a "No such file or directory" error, which indicates that a 
library is missing.

So, I know now that the issue I'm having is simply with GZIP. Bacula gives us 
the ability to back up directories (even /) under GZIP, but if I try a 
bare-metal restore, the error is that GZIP is not supported on this client.

Is there a way to compile static-bacula-fd with GZIP ability?

Thanks!!

On Wed, Jan 16, 2013 at 4:50 AM, Florian Heigl 
florian.he...@gmail.com wrote:
Hi all,

I'm aware this must be sort of an FAQ. I just failed to find good
examples in my searches.
What I'm looking for is how others solved listing the filesystems on a
client using a client-side command that is configured on the dir.

My goal is to back up everything local on a system, but the standard
example of looking for, e.g., mounts from hda and hdb is not good enough.
For the sake of an example, let's say the individual servers could be
running FreeBSD or Linux and could be using udev to rename their
disks, or could be attaching more using iSCSI.
One idea I can think of is using a list of filesystem types that matter.
That way you can handle most things and also exclude cluster
filesystems like ocfs2 that should best be backed up with a different
job and separate fd.
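The whitelist idea above could be sketched as a small client-side script (all names here are illustrative, not from this thread); its output would feed a FileSet include via Bacula's "File = \\|/path/to/script" mechanism:

```python
#!/usr/bin/env python
# Hypothetical helper: print mount points whose filesystem type is in a
# whitelist, one per line. Cluster filesystems like ocfs2 are simply left
# out of the whitelist so they fall to a separate job.
GOOD_TYPES = {"ext2", "ext3", "ext4", "xfs", "ufs", "zfs"}

def local_mounts(mount_table_lines):
    """Yield mount points from /proc/mounts-style lines: dev mnt fstype ..."""
    for line in mount_table_lines:
        fields = line.split()
        if len(fields) >= 3 and fields[2] in GOOD_TYPES:
            yield fields[1]

# On a Linux client one would feed it the real mount table, e.g.:
#   with open("/proc/mounts") as f:
#       for mnt in local_mounts(f):
#           print(mnt)
# On FreeBSD, parsing the output of `mount -p` would be the analogue.
```

As noted below, the weakness is any filesystem type missing from the whitelist, so the list needs maintaining.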

On the other hand this idea might break if someone uses an esoteric
zbcdfs, which I'm not expecting in my list of good filesystems.

How have you gone about solving this?


Florian

--
the purpose of libvirt is to provide an abstraction layer hiding all
xen features added since 2006 until they were finally understood and
copied by the kvm devs.

--
Master Java SE, Java EE, Eclipse, Spring, Hibernate, JavaScript, jQuery
and much more. Keep your Java skills current with LearnJavaNow -
200+ hours of step-by-step video tutorials by Java experts.
SALE $49.99 this month only -- learn more at:
http://p.sf.net/sfu/learnmore_122612
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



Re: [Bacula-users] Catastrophic overflow block problems

2013-01-17 Thread Josh Fisher

On 1/17/2013 11:06 AM, Ruth Ivimey-Cook wrote:

Hi,

I am sometimes getting these errors in my bacula backups:
Fatal error: device.c:192 Catastrophic error. Cannot write overflow block to device 
DiskStorage-drive-0
and it is more likely on the larger volume backups. It seemingly 
results from bacula trying to write an additional block to a disk 
drive that is already 100% full. How can I stop bacula from believing 
this is a valid thing to do?




You don't. The trick is to define a maximum volume size and number of 
volumes on the drive so that it is impossible to reach 100% of the 
physical drive's capacity. This will prevent the i/o error, and Bacula 
will instead hit end of volume and seek another volume. Of course, if no 
existing volumes can be recycled yet, then there simply isn't enough 
space on the drive. In that case, it is easy to add another drive to an 
existing autochanger, since vchanger allows for multiple simultaneous 
magazine drives.
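A minimal sketch of the Pool settings Josh describes (the pool name and sizes are illustrative; the point is that Maximum Volumes times Maximum Volume Bytes must stay below the magazine's usable capacity):

```
Pool {
  Name = DiskPool1            # illustrative name
  Pool Type = Backup
  Maximum Volume Bytes = 20G  # cap each volume well short of the disk
  Maximum Volumes = 24        # 24 x 20G must fit under the drive's capacity
  Recycle = yes
  AutoPrune = yes
}
```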


Background:  I have bacula setup on my local network to backup a file 
server and a number of workstations. The file server is also the 
bacula director and is running Fedora 15 and 
bacula-common-5.0.3-28.fc15.x86_64. Bacula is writing backups to an 
iSCSI disk group (not array) over ethernet; there are 6 disks of 1TB 
to 2TB size and these are managed using vchanger 0.8.6, with 6 
magazines each with 24 virtual volumes. The file server has 3.5TB of 
files and other workstations add about another 1TB.


More-complete log:

17-Jan 14:49 helva-sd JobId 3417: Recycled volume DiskPool1_0006_0017 on device 
DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0), all previous data lost.
17-Jan 14:49 helva-sd JobId 3417: New volume DiskPool1_0006_0017 mounted on device 
DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0) at 17-Jan-2013 14:49.
17-Jan 14:49 helva-sd JobId 3417: End of Volume DiskPool1_0006_0017 at 0:216 on 
device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). Write of 64512 bytes 
got 3879.
17-Jan 14:49 helva-sd JobId 3417: End of medium on Volume 
DiskPool1_0006_0017 Bytes=217 Blocks=0 at 17-Jan-2013 14:49.
17-Jan 14:49 helva-sd JobId 3417: 3307 Issuing autochanger unload slot 89, 
drive 0 command.
17-Jan 14:49 helva-dir JobId 3417: Using Volume DiskPool1_0006_0018 from 
'Scratch' pool.
17-Jan 14:49 helva-sd JobId 3417: 3301 Issuing autochanger loaded? drive 
0 command.
17-Jan 14:49 helva-sd JobId 3417: 3302 Autochanger loaded? drive 0, 
result: nothing loaded.
17-Jan 14:49 helva-sd JobId 3417: 3304 Issuing autochanger load slot 90, drive 
0 command.
17-Jan 14:49 helva-sd JobId 3417: 3305 Autochanger load slot 90, drive 0, 
status is OK.
17-Jan 14:49 helva-sd JobId 3417: Recycled volume DiskPool1_0006_0018 on device 
DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0), all previous data lost.
17-Jan 14:49 helva-sd JobId 3417: New volume DiskPool1_0006_0018 mounted on device 
DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0) at 17-Jan-2013 14:49.
17-Jan 14:49 helva-sd JobId 3417: End of Volume DiskPool1_0006_0018 at 0:216 on 
device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). Write of 64512 bytes 
got 3879.
17-Jan 14:49 helva-sd JobId 3417: Fatal error: device.c:192 Catastrophic error. Cannot write overflow block to device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). ERR=No space left on device
17-Jan 14:49 helva-fd JobId 3417: Error: bsock.c:393 Write error sending 65562 bytes to Storage daemon:helva.cam.ivimey.org:9103: ERR=Connection reset by peer
17-Jan 14:49 helva-fd JobId 3417: Fatal error: backup.c:1024 Network send error to SD. ERR=Connection reset by peer
17-Jan 14:49 helva-sd JobId 3417: Fatal error: device.c:192 Catastrophic error. Cannot write overflow block to device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). ERR=No space left on device
17-Jan 14:49 helva-sd JobId 3417: Fatal error: device.c:192 Catastrophic error. Cannot write overflow block to device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). ERR=No space left on device
17-Jan 14:49 helva-sd JobId 3417: Fatal error: device.c:192 Catastrophic error. Cannot write overflow block to device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). ERR=No space left on device
17-Jan 14:49 helva-sd JobId 3417: Fatal error: device.c:192 Catastrophic error. Cannot write overflow block to device DiskStorage-drive-0 (/var/spool/bacula/vchanger/0/drive0). ERR=No space left on device
17-Jan 14:49 helva-sd JobId 3417: Job write elapsed time = 14:30:55, Transfer rate = 12.06 M Bytes/second
17-Jan 14:49 helva-dir JobId 3417: Error: Bacula helva-dir 5.0.3 (04Aug10): 
17-Jan-2013 14:49:32
   Build OS:   x86_64-redhat-linux-gnu redhat
   JobId:  3417
   Job:Helva_Home.2013-01-17_00.17.26_23
   Backup Level:   Full
   Client: helva-fd 

Re: [Bacula-users] bacula tape rotation

2013-01-17 Thread John Drescher
 I have one question about my backup plan.

 I have a 500GB web partition I want to back up; I have an IBM TS3200 library 
 and two 800GB tapes.

 My plan is that on Sunday at 2 a.m. I will make a full backup and then every 
 day an incremental at 2 a.m. on the first tape.
 The next Sunday I will again make a full backup on the next tape and then 
 every day an incremental on the second tape.
 And this will repeat.

 Can you please tell me how to rotate these tapes (prune, label, etc.)?
 I have no idea how to do this.


You can somewhat control this using the settings in your pool resource
to limit the volume retention. Or totally control this by using
separate pools for the first and second set and using the schedule to
explicitly say what pool to use on what week (even / odd ...).

John
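One hedged sketch of the "separate pools plus schedule" approach; the pool names are invented, and in a real config each of Bacula's week-of-year keywords (w00 through w53) would need listing explicitly:

```
Schedule {
  Name = "AlternateWeeks"
  # Odd weeks use the first tape's pool (list w01, w03, ... w53 in full)
  Run = Level=Full Pool=TapeSetA w01 sun at 2:00
  Run = Level=Incremental Pool=TapeSetA w01 mon-sat at 2:00
  # Even weeks use the second tape's pool (w02, w04, ... w52)
  Run = Level=Full Pool=TapeSetB w02 sun at 2:00
  Run = Level=Incremental Pool=TapeSetB w02 mon-sat at 2:00
}
```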



Re: [Bacula-users] Catastrophic overflow block problems

2013-01-17 Thread Dan Langille
On 2013-01-17 11:06, Ruth Ivimey-Cook wrote:
 Hi,

  I am sometimes getting these errors in my bacula backups:

 Fatal error: device.c:192 Catastrophic error. Cannot write overflow
 block to device DiskStorage-drive-0
  and it is more likely on the larger volume backups. It seemingly
 results from bacula trying to write an additional block to a disk
 drive that is already 100% full. How can I stop bacula from believing
 this is a valid thing to do?

Disk space is outside the scope of the Bacula project. It is the 
responsibility
of the sysadmin to manage disk space.

The other post mentioned how to restrict a Pool to a maximum size per 
Volume and a maximum number of Volumes per Pool.

-- 
Dan Langille - http://langille.org/



[Bacula-users] Best way to force a volume to recycle so it can be used immediately

2013-01-17 Thread Craig Isdahl

All -

I goofed on a config change and have multiple copies of 20GB data 
backups - I don't need all that, so I'd like to recycle them immediately 
rather than waiting for auto recycling to happen (90 days). All backups 
are to disk volumes.  What's the best way to do that?  If I mark them 
'used', will that work?  If not, do I need to purge them?


Version: 5.0.0


Thanks in advance!
-- Craig


Re: [Bacula-users] Best way to force a volume to recycle so it can be used immediately

2013-01-17 Thread Novosielski, Ryan
Any drawback to a purge right now? None I can think of if you don't need the 
data. Marking them 'used' will respect the retention policy, which it sounds 
like you don't want.
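For reference, the two options look roughly like this in bconsole (the volume name is invented and the syntax is from memory for 5.0.x, so verify with bconsole's help command first):

```
* purge jobs volume=Disk-0042              # discard job records now; volume becomes Purged
* update volume=Disk-0042 volstatus=Used   # alternative: stop new writes, keep retention
```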



From: Craig Isdahl [mailto:cr...@isdahl.com]
Sent: Thursday, January 17, 2013 03:36 PM
To: Bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Best way to force a volume to recycle so it can be used 
immediately

All -

I goofed on a config change and have multiple copies of 20GB data backups - I 
don't need all that, so I'd like to recycle them immediately rather than waiting 
for auto recycling to happen (90 days).  All backups are to disk volumes.  
What's the best way to do that?  If I mark them 'used', will that work?  If not, 
do I need to purge them?

Version: 5.0.0


Thanks in advance!
-- Craig


Re: [Bacula-users] Catastrophic overflow block problems

2013-01-17 Thread Josh Fisher

On 1/17/2013 2:10 PM, Ruth Ivimey-Cook wrote:

Josh Fisher wrote:

You don't.
I find it very strange that returning "device full" from a volume 
write can reasonably be interpreted as "device not quite full".
The trick is to define a maximum volume size and number of volumes on 
the drive so that it is impossible to reach 100% of the physical 
drive's capacity. This will prevent the i/o error, and Bacula will 
instead hit end of volume and seek another volume. Of course, if no 
existing volumes can be recycled yet, then there simply isn't enough 
space on the drive. In that case, it is easy to add another drive to 
an existing autochanger, since vchanger allows for multiple 
simultaneous magazine drives.
I don't understand how to do this then without defining the number of 
volumes so low that I waste huge amounts of space on the drives as a 
matter of course.


One way is to partition the drives. Keeping volumes of the same size on 
the same partition allows specifying the exact number of volumes. Each 
partition is a magazine, and any number of partitions can be used 
simultaneously. For example, break a 1 TB drive into two partitions, one 
200 GB partition holding 10 volumes in a pool with a max volume size of 
~20 GB for incremental jobs, and an 800 GB partition holding 8 volumes 
in a pool with max volume size of 100 GB for full jobs. Etc.
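The partitioning arithmetic in the example above can be sanity-checked with a few lines (a sketch; the numbers mirror the 1 TB split):

```python
def magazine_fits(partition_gb, volume_count, max_volume_gb):
    """True if volume_count volumes, each grown to its pool's maximum
    volume size, still fit inside the partition (magazine)."""
    return volume_count * max_volume_gb <= partition_gb

# The 1 TB drive split into two magazines, as in the example:
print(magazine_fits(200, 10, 20))    # incremental magazine
print(magazine_fits(800, 8, 100))    # full-backup magazine
```

If the check fails, either the volume count or the per-pool maximum volume size has to come down, which is exactly the sizing exercise being described.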




A little more detail about what I'm doing:

  * Some backups are assigned longer retention times than others -
e.g. some full backups live for a year, some incrs live for just 3
months.
  * I have various max volume sizes from 20GB to 400GB, assigned to
each file pool depending on the likely size of a backup (e.g.
incrs are likely smaller than full) so that a volume will expire
in a reasonable time - I don't want 100GB of backups to be kept
alive (and using space) because they are in the same volume as
more recent backups that haven't expired yet.
  * I have set up 24 volumes per disk so that, should the volumes be
the smaller 90GB ones, I don't (on average) run out of volumes too
quickly.
  * The result is that most disks are reasonably full most of the
time, which is good.
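The mix of retention times and volume sizes described above might look like this in Pool resources (names and values illustrative, not the actual config):

```
Pool {
  Name = Long-Full             # full backups kept for a year
  Pool Type = Backup
  Volume Retention = 1 year
  Maximum Volume Bytes = 400G
  Maximum Volumes = 24
  Recycle = yes
}
Pool {
  Name = Short-Incr            # incrementals expire after three months
  Pool Type = Backup
  Volume Retention = 3 months
  Maximum Volume Bytes = 20G
  Maximum Volumes = 24
  Recycle = yes
}
```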

To be honest, I wish Bacula had a disk mode in which the concept of 
volumes was mostly eliminated: devices had backup pools and backups 
within them and it would be backups that were recycled. It would make 
much more sense for a random-access medium.


True, but Bacula must also work with tape drives, and that would be a 
very extensive rewrite.




Would an alternative solution be to adapt the vchanger program so that 
it monitored disk space and returned "device full" early?


No, because vchanger only runs very briefly when Bacula requests a 
volume be loaded or unloaded. It basically points Bacula to the 
particular volume file it is to use and then exits. Bacula reads/writes 
the file directly, so there is no interaction between vchanger and 
Bacula when the data is actually being written.




Ruth

--
Software Manager & Engineer
Tel: 01223 414180
Blog: http://www.ivimey.org/blog
LinkedIn: http://uk.linkedin.com/in/ruthivimeycook/




Re: [Bacula-users] Best way to force a volume to recycle so it can be used immediately

2013-01-17 Thread Marcello Romani
On 17/01/2013 21:36, Craig Isdahl wrote:
 All -

 I goofed on a config change and have multiple copies of 20GB data
 backups - I don't need all that, so I'd like to recycle them immediately
 rather than waiting for auto recycling to happen (90 days). All backups
 are to disk volumes.  What's the best way to do that?  If I mark them
 'used', will that work?  If not, do I need to purge them?

 Version: 5.0.0


 Thanks in advance!
 -- Craig





Used volumes are not selected for new backups.

I think you should use the purge command on each volume you want to empty.
Like:

purge jobs volume=<name of the volume>

-- 
Marcello Romani
