Re: [Bacula-users] volume Labeling with Autochanger

2013-09-20 Thread Steve Ellis
I'm not an expert, but my guess would be that no, this won't affect which
drive is used for backup--except to the extent that the tape in
question is left in that drive until the backup runs. My understanding is
that a mounted (suitable) tape, in whichever drive, is preferred when
running a backup.

I assume that your library has more than one drive (or it wouldn't
prompt)--in general, you will receive more helpful responses if you let us
see at least part of your config (in this case, especially the sd config).
You can always anonymize the passwords & such.
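
For reference, the drive number bconsole asks for is the Drive Index of the
corresponding Device resource in bacula-sd.conf. A minimal two-drive sketch
(resource names, devices and paths here are invented, not from your setup):

Autochanger {
  Name = "Autochanger"
  Device = Drive-0, Drive-1
  Changer Device = /dev/sg3
  Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
}
Device {
  Name = "Drive-0"
  Drive Index = 0              # the number the label prompt refers to
  Media Type = LTO4
  Archive Device = /dev/nst0
  Autochanger = yes
}
Device {
  Name = "Drive-1"
  Drive Index = 1
  Media Type = LTO4
  Archive Device = /dev/nst1
  Autochanger = yes
}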

-se


On Fri, Sep 20, 2013 at 12:57 PM, Deepak dee...@palincorporates.com wrote:

 Dear Team,

 While labeling a new volume, my system asks me to enter the autochanger
 drive.

 See Below command output:

 *label barcodes slot=1 pool=pool1
 Automatically selected Catalog: catalog1
 Using Catalog catalog1
 The defined Storage resources are:
   1: File
   2: Autochanger
 Select Storage resource (1-2): 2
 Enter autochanger drive[0]:



 Please tell me: is this Enter autochanger drive[0]: prompt in any way
 related to backup/restore jobs (will this selection play any role in
 selecting the drive used during job execution)?


 Thanks in advance...!










-- 

They that can give up essential liberty to obtain a little temporary safety
deserve neither liberty nor safety.

   --Benjamin Franklin, 1759


[Bacula-users] Wake-on-LAN and bacula

2013-09-05 Thread Steve Ellis
My bacula installation supports night-time backups, from a Linux-based
server, of several Windows machines (Win7 and XP) that are typically
suspended when the backups start. I made sure to enable Wake-on-LAN on all
of the Windows clients, and arranged a script to wake them, but Windows
often believes the machine is idle during a backup, so the machines used to
go back to sleep mid-backup.

I updated my python script to run a daemon process for each machine to keep
it awake during backup. It is relatively short, so I'm including it here on
the off-chance that it proves helpful to anyone--I'm happy to answer any
questions about it. It depends on a library, daemon.py (note: not the
pending daemon PEP), which I downloaded here:
http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python,
but I believe this github implementation would also probably work:
https://github.com/stackd/daemon-py

Note that the script does require your host-to-MAC address mapping, since
the machines may be down and you can't just ARP for them (I use a python
dictionary, but this could be extended to a db for those with larger
sites--especially if you already track MAC addresses). To make a Windows
machine do WoL, you will likely have to both configure WoL in the BIOS and
make a config change in Windows on the LAN interface to enable WoL.
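
In case the attachment doesn't survive the archives: the core of waking a
host is just a WoL magic packet. A minimal sketch (host name and MAC below
are invented; my real script adds the daemonized keep-awake loop):

#!/usr/bin/env python
# sketch: send a WoL magic packet (6 x 0xff followed by the MAC 16 times)
import socket

MACS = {'winbox1': '00:11:22:33:44:55'}  # your host -> MAC mapping goes here

def wake(host):
    mac = MACS[host].replace(':', '').decode('hex')  # bytes.fromhex() on python3
    pkt = '\xff' * 6 + mac * 16
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(pkt, ('255.255.255.255', 9))  # UDP broadcast, port 9 (discard)

wake('winbox1')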

Here's how I use my script in my bacula-dir.conf:
RunBeforeJob = "/etc/bacula/wake_up.py --daemon start %c"
RunAfterJob = "/etc/bacula/wake_up.py --daemon stop %c"

I don't recall if this assumes that the hostname and the client name are
the same, I suspect it does.

-se



[Attachment: wake_up.py]


Re: [Bacula-users] Unable to change /var/spool/bacula

2013-05-15 Thread Steve Ellis
The make_catalog_backup script would need to be changed to use a different
working directory (at least with my version of bacula). I made that change
myself some time back (I also like to keep the bacula.sql file around, so I
didn't change delete_catalog_backup). Alas, since I install from Fedora
RPMs, my local changes were recently lost to an updated RPM--however, as I
recall, the only change needed is the wd variable in the script (my perl is
weak enough that I was happy I was able to figure out even that
triviality).
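
From memory, the edit is a one-liner of this shape (treat it as a sketch,
not a diff--the exact variable name and path depend on your version and
packaging):

# in make_catalog_backup: point the working directory somewhere roomier
my $wd = "/home/bacula";    # was "/var/spool/bacula"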

-se


On Wed, May 15, 2013 at 10:45 AM, Patrick McEvoy psmce...@gmail.com wrote:

 Hello,

 Is it possible to change the location where the files in
 /var/spool/bacula are written to?

 The size of my bacula.sql file is getting quite large, so I am trying to
 move the file, which is stored in /var/spool/bacula, to a different
 partition with more space.  I have changed the WorkingDirectory
 parameter in bacula-dir.conf, bacula-fd.conf and bacula-sd.conf from
 WorkingDirectory = "/var/spool/bacula" to WorkingDirectory =
 "/home/bacula", and in bacula-dir.conf I changed the FileSet for the
 Catalog from File = "/var/spool/bacula/bacula.sql" to File =
 "/home/bacula/bacula.sql".  When I run the BackupCatalog job after
 restarting the director, file daemon and storage daemon, the
 bacula.sql is still being written to /var/spool/bacula.

 Thanks for the help,

 Patrick








Re: [Bacula-users] Should I add /etc/bacula to the Catalog backup?

2013-01-25 Thread Steve Ellis
Due to the disk layout on my system, I have the DB dump stored elsewhere on
my server, and I changed the catalog backup to not delete the dump.
Assuming a less-than-catastrophic crash, my hope would be to restore the DB
from the on-disk copy. I keep the DB dump, bootstrap files and 'important'
non-tape backups on a separate physical disk in my system.

Another perhaps interesting step I took, to give me more options after a
crash, was to write a script to handle bootstrap files more intelligently
(i.e. eliminate unnecessary backup records when differential backups are
used) and to copy them to a remote server I can reach via ssh. I can share
my script if there is interest; it is only about 70 lines of python
code--I haven't checked, but perhaps bacula has since been changed to
better handle differentials when writing bootstrap files natively.
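
The pruning half is specific to my setup, but the copy-offsite half is
little more than this (a sketch; paths and host are invented):

# push the current bootstrap files to a remote box over ssh
import glob, subprocess
for bsr in glob.glob('/var/spool/bacula/*.bsr'):
    subprocess.check_call(['scp', '-q', bsr,
                           'backup@offsite.example.net:bacula-bsr/'])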

-se


On Fri, Jan 25, 2013 at 10:44 AM, Alan McKay alan.mckay+bac...@gmail.com wrote:

 On Fri, Jan 25, 2013 at 11:17 AM, Alan McKay
 alan.mckay+bac...@gmail.com wrote:
  File = "/var/lib/bacula/bacula.sql"

 I went looking for this file out of curiosity, and found it was not there.

 But upon further digging, it looks like the script dumps the DB, then
 backs it up, then removes that file.

 Do I have that right?

 And further, before creating the file in the first place it removes
 one that may already be there, by the looks of it.

 Just making sure I got that right.  Is there a reason not to leave the
 file there until next time?  Just a space issue?








Re: [Bacula-users] Excruciatingly slow backup to Tape - local and remote

2012-12-17 Thread Steve Ellis
You don't mention the technology behind your tape drive, your database
backend, CPU, RAM, or what your disk subsystem looks like--all of which
would be needed for a reasonable chance of analyzing this properly, but
I'll wade in nonetheless.

You are almost certainly shoeshining the heck out of your tape media. Most
drive types have to resort to stopping and starting once you get below a
certain threshold (for LTO3, I think this is somewhere around 30MB/sec, as
an example): the tape drive winds up stopping and restarting very
frequently (as well as rewinding a bit). I imagine this wears out both
tapes and drive mechanisms pretty quickly.

Since you are not spooling data, I suspect you are also not spooling
attributes--my first guess would be that your database backend is
insufficiently fast. At a minimum you should spool attributes, and spooling
data even on the local backups can yield performance gains, depending on
your disk subsystem architecture. If the network and disk subsystem can't
deliver at least the minimum streaming rate for your tape drive, you need
to spool to a disk fast enough to keep up with the tape drive's needs.
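
For the archives, both are per-Job directives in bacula-dir.conf, along
these lines (a sketch; job details omitted):

Job {
  ...
  Spool Attributes = yes   # batch catalog inserts instead of row-at-a-time
  Spool Data = yes         # stage to disk, then stream to tape at full rate
}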

If you are using something like LTO5, I think spooling using either SSD or
RAID0 may almost be required--but perhaps LTO5 actually supports relatively
low minimum streaming rates--I don't know, as I can't afford LTO5 (and I
think the minimum streaming rate may well be manufacturer dependent).

-se


On Mon, Dec 17, 2012 at 3:47 PM, shockwavecs bacula-fo...@backupcentral.com wrote:

 I'm only seeing around 2MB/s write speed to tape. I have read all about
 spooling the data, etc. I would expect this to be an issue if I was
 complaining about getting 18-25 MB/s instead of 90-100MB/s maybe? Anyways,
 I simply cannot understand why a single job runs at 2 MB/s and 3 jobs will
 run about 2.5MB/s total. See output below (all servers in the same
 rack/switch running at the same time):

 Writing: Full Backup job MSSERVER2 JobId=24 Volume=BAL_01
 pool=Default device=AIT-4 (/dev/nst0)
 spooling=0 despooling=0 despool_wait=0
 Files=18,121 Bytes=9,726,633,719 Bytes/sec=1,008,254
 FDReadSeqNo=305,615 in_msg=251883 out_msg=5 fd=6

 Writing: Full Backup job TS JobId=25 Volume=BAL_01
 pool=Default device=AIT-4 (/dev/nst0)
 spooling=0 despooling=0 despool_wait=0
 Files=20,896 Bytes=5,088,180,026 Bytes/sec=691,234
 FDReadSeqNo=256,355 in_msg=195224 out_msg=5 fd=8

 Writing: Full Backup job BALDC1 JobId=26 Volume=BAL_01
 pool=Default device=AIT-4 (/dev/nst0)
 spooling=0 despooling=0 despool_wait=0
 Files=39,608 Bytes=4,514,041,685 Bytes/sec=658,503
 FDReadSeqNo=411,027 in_msg=295248 out_msg=5 fd=10



 Here is the output of backing up the bacula server itself (this proves
 spooling would not help me, right?):


   Build OS:   i686-pc-linux-gnu redhat
   JobId:  12
   Backup Level:   Full (upgraded from Incremental)
   Client: bacula-fd 5.2.6 (21Feb12)
 i686-pc-linux-gnu,redhat,
   FileSet:Full Set 2012-12-17 11:24:20
   Pool:   Default (From Job resource)
   Catalog:MyCatalog (From Client resource)
   Storage:SONY (From command line)
   Scheduled time: 17-Dec-2012 11:24:18
   Start time: 17-Dec-2012 11:24:22
   End time:   17-Dec-2012 12:34:46
   Elapsed time:   1 hour 10 mins 24 secs
   Priority:   10
   FD Files Written:   118,347
   SD Files Written:   118,347
   FD Bytes Written:   8,199,458,340 (8.199 GB)
   SD Bytes Written:   8,215,432,405 (8.215 GB)
   Rate:   1941.2 KB/s
   Software Compression:   None
   VSS:no
   Encryption: no
   Accurate:   no
   Volume name(s): BAL_01
   Volume Session Id:  1
   Volume Session Time:1355761332
   Last Volume Bytes:  8,225,667,072 (8.225 GB)
   Non-fatal FD errors:0
   SD Errors:  0
   FD termination status:  OK
   SD termination status:  OK
   Termination:Backup OK
  Begin pruning Jobs older than 6 months .
  No Jobs found to prune.
  Begin pruning Files.
  No Files found to prune.
  End auto prune.






Re: [Bacula-users] slow despool to tape speed?

2012-04-11 Thread Steve Ellis
On 4/10/12 4:36 PM, Steve Costaras wrote:
 I'm running bacula 5.2.6 under ubuntu 10.04 LTS. This is a pretty simple
 setup, just backing up the same server that bacula is on, as it's the main
 fileserver.

 For some background: the main fileserver array is comprised of 96 2TB
 drives in a raid-60 (16 raidz2 vdevs of 6 drives each). I have a separate
 spool directory comprised of 16 2TB drives in a raid-10. All 112 drives are
 spread across 7 SAS HBAs. Bacula's database (running sqlite v3.6.22) is on
 an SSD mirror. This is going to an LTO4 tape system. The base hardware of
 this system is a dual-CPU X5680 box with 48GB of ram.

 Doing cp, dd, and other tests to the spool directory is fine (getting over
 1GB/s); likewise, when bacula is running a full backup (10 jobs, different
 directories, in parallel) I'm able to write to the spool directory (vmstat)
 at well over 700MB/s. btape and dd testing to the tape drive from the
 spool directory seems to run fine at 114MB/s, which is good.

 Now when bacula itself does the writing to the tape (normal backup process)
 I am only getting about 50-80MB/s, which is pretty bad.

 I'm trying to narrow down what the slowdown may be here, but am not
 familiar with the internals that well. My current thinking is something
 along the lines of:

 - bacula sd is not doing enough pre-fetching of the spool file?

 - perhaps when the spool data is being written to tape it needs to update
 the catalogue at that point? (The database is on an ssd mirror (Crucial
 C300 series, 50% overprovisioned); I do see some write bursts here (200
 w/sec range), but utilization is still well below 1% (iostat -x).)

Your hardware is _much_ better than mine (spool on non-RAID, no SSD,
LTO3, 4GB RAM, etc), yet I get tape throughput not a lot lower. At the
despool rates you are seeing, I'm guessing you may be shoeshining the
tape on LTO4. Here are the results from my most recent 3 full
backup jobs (presumably tape despool for incrementals is similar, but
mine are so small as to include way too much start/stop overhead):
Job1:
Committing spooled data to Volume MO0039L3. Despooling 45,237,900,235 
bytes ...
Despooling elapsed time = 00:13:11, Transfer rate = 57.19 M Bytes/second

Job2:
Writing spooled data to Volume. Despooling 85,899,607,651 bytes ...
Despooling elapsed time = 00:27:47, Transfer rate = 51.52 M Bytes/second
Committing spooled data to Volume MO0039L3. Despooling 21,965,154,476 
bytes ...
Despooling elapsed time = 00:09:19, Transfer rate = 39.29 M Bytes/second

Job3:
Committing spooled data to Volume MO0039L3. Despooling 74,877,523,075 
bytes ...
Despooling elapsed time = 00:23:07, Transfer rate = 53.98 M Bytes/second

Based on all the backups I've examined in the past, 50MBytes/sec is 
generally about typical for me.

I have a few questions about your setup:

* do you have Spool Attributes enabled?  -- I think this may help.

* have you increased the Maximum File Size in your tape config? --My 
tape throughput went up somewhat when I went to 5G for the file size (on 
LTO4 you might even want it larger, my drive is LTO3).

* why are you using sqlite?  I was under the impression it was 
non-preferred for several reasons (I'm using mysql, but I believe 
postgres is somewhat preferred).

* have you increased the Maximum Network Buffer Size (both SD and FD)
and Maximum Block Size (SD only)? I have both at 256K; as I recall the
Network Buffer Size can't be larger (and I'm not sure this will help
your despool anyway). The block size change _may_ make previously
written tapes incompatible; I don't remember why I went to 256K on
block size, but I believe the default is 64K.

If I were in your situation, I would see if Spool Attributes and 
Maximum File Size helped (these are easy to change, after all).  If 
that didn't help, I would move to a different database backend (this is 
probably a lot harder).  Then at that point, maybe Maximum Block Size 
(more reluctant to change if it does cause tape incompatibility).
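
For concreteness, these all live on the sd Device resource; mine (LTO3)
looks roughly like this--scale the file size up for LTO4:

Device {
  ...
  Maximum File Size = 5G               # fewer file marks, better streaming
  Maximum Block Size = 262144          # may make older tapes unreadable
  Maximum Network Buffer Size = 262144
  Spool Directory = /backup/spool
  Maximum Job Spool Size = 50G
}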

-se




Re: [Bacula-users] Backups increased to 500GB after adding to IPA domain

2012-04-05 Thread Steve Ellis
On 4/5/12 8:21 AM, Abdullah Sofizada wrote:
 Hi guys, this is a very weird one. I've been trying to tackle this for the
 past two weeks or so to no avail...

 My director runs on Redhat Rhel 5.5 running bacula 5.0.2. My clients are
 Redhat Rhel 5.5 running bacula 5.0.2.

 Each of the bacula clients has less than 15 GB of data. Backups of each
 client were fine. But two weeks ago the backups for each of these
 clients ballooned to 550GB each!!

 When I do a df -h, the servers only show 15GB of space used. The one
 difference I noticed in the past two weeks is that I added these servers
 to our new IPA domain, which in essence is an ldap server using kerberos
 authentication for identity management. This server runs on Rhel 6.2.

 I have many other clients which are not part of the IPA domain that are
 backing up just fine. So I'm sure it has something to do with this. I
 have even tried to remove my bacula clients from the IPA domain, then
 ran a backup. But it still reports 550GB of data being backed up.

 I appreciate the help...



Complete guess: does something that you added use one or more 'large'
sparse files? If so, either exclude those files from the backup, if
they are, say, log-type files, or turn on sparse file detection in the
fileset (add sparse=yes to the fileset Options resource, as I recall).
The only other thing I can think of is if you are using 'onefs=yes'
and something introduced links such that most of your data is now being
backed up multiple times.
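
A sketch of the Options change (fileset name and path are placeholders):

FileSet {
  Name = "example-set"
  Include {
    Options {
      signature = MD5
      sparse = yes      # store holes as holes instead of reading zeros
    }
    File = /
  }
}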

-se



Re: [Bacula-users] [Bacula-devel] new jobs cannot spool until existing jobs finish despooling?

2012-03-23 Thread Steve Ellis
On 3/23/12 1:20 AM, Kern Sibbald wrote:
 Hello,

 This is in response to the email from Jesper (see below).  As it is
 not always obvious, I am not in the least upset in any way.  This is
 meant to be information about our future direction, and more
 directly a response to Jesper's concerns and questions.

 This is in general, not just to Jesper:
 You may not agree with me.  That is OK.  If you do, that is even
 better.  Please don't
 get upset though, there are enough things in the world going wrong that
 it is
 not worth the effort to complain to someone who has and is giving you
 lots of very
 useful *free* software.

 Best regards,
 Kern

 On 03/23/2012 07:10 AM, Jesper Krogh wrote:

 Hi Kern.

 Awesome. This is some of the key features Bacula has been lacking.
 Others are: #10 and #29

 It is your call (or Bacula Systems'), but I really would suggest that
 you try to push a mail to the Bacula mailing list with the amount
 of direct funding needed to just develop these features directly
 in the open source version of Bacula. My feeling is that you're
 silently killing off the Open Source bacula by following this path
 as strongly as you do.

 I like and enjoy working with Bacula. Bacula has been a cornerstone
 in our setup for more than 7 years, pushing over 1 petabyte of data out
 to tape, using different autochangers and LTO generations, and been
 adaptable and flexible along the way. 1)

 As with all other components in the IT setup, the amount of work time
 and money that has gone into backup over the years is significant.

 The problem is, I don't think there is a single person on this planet
 running bacula in a non-Enterprise context. The amount of work,
 hardware and time needed to run
 a decent backup system with tapes and autochangers (which
 is the corner where Bacula is truly awesome) is highly
 overlapping with the enterprise segment, so the money is there.

Not to stick my nose in where it isn't wanted, but I just wanted to say 
that I am at least one single person running bacula in a non-Enterprise 
context--I use it on my home Linux server, and back up that server, 2 
windows XP clients and 3 windows 7-64 clients.  I can assure you that my 
wife and my less-than-teenage children are not using their computers as 
part of a business enterprise.  However, it was difficult to convince 
the wife that it made sense to purchase a used 16-slot LTO3 changer 
($1000 on Ebay), which was an upgrade to the previous non-changer LTO2 
drive we used (coincidentally, also around $1000 on Ebay).  My active 
backup library has about 15TB of data on approximately 50 tapes. I do
think that for any site smaller than mine the economical choice might
be to avoid tape--but many people seem to be using bacula fine with
disk-only solutions (and I too use disk-based backup for the things I'm
more likely to have to restore).

I've had some technical conflict with how Kern runs things in 
Bacula--but it is his project and his prerogative to handle matters as 
he sees fit.  Overall I'm thrilled with the community version, and 
though I have not contributed in a financial sense, I've tried to 
pay-it-forward by helping from time to time on the bacula-users mailing 
list.  I don't believe that the community version is in jeopardy, mostly 
due to the extremely active user community.

Thanks Kern, for everything you do!  And also thanks to all Bacula 
contributors--both technical and financial,

-se





Re: [Bacula-users] Backup fails because VSS can't be created

2012-03-20 Thread Steve Ellis
On 3/20/12 6:50 AM, Gustavo Gibson da Silva wrote:
 Hi there,

 I have several machines with different disk configurations (some have
 c:, d: and e:, others have only c:, and others have c: and d:) sharing
 the same fileset. If Bacula 5.0.3 did not find some folders (for
 instance, d:\systemstate on a machine with only a c: drive), it
 complained about it but the backup ended OK.

 Now I have upgraded to bacula 5.2.6 and some backups stopped working.
 Bacula complains that it can't create VSS on d:\ and produces a fatal
 error when the computer doesn't have this disk unit.

 Is there something that I have missed on the upgrade?

 TIA,

 Gustavo.

I had this problem when moving from 5.0 to 5.2, and solved it by 
creating different filesets for windows machines with c-only, 
c-and-d-only, etc.  Annoying, but it solved the problem (and my network 
is quite small).  Hopefully there will eventually be a better 
option--but I confess I didn't try to file it as a bug.
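
Something along these lines (fileset names illustrative):

FileSet {
  Name = "win-c-only"
  Enable VSS = yes
  Include {
    Options { signature = MD5 }
    File = "C:/"
  }
}
FileSet {
  Name = "win-c-and-d"
  Enable VSS = yes
  Include {
    Options { signature = MD5 }
    File = "C:/"
    File = "D:/"
  }
}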

-se




Re: [Bacula-users] Large backup to tape?

2012-03-08 Thread Steve Ellis
On 3/8/12 9:38 AM, Erich Weiler wrote:
 Thanks for the suggestions!

 We have a couple more questions that I hope have easy answers. So, it's
 been strongly suggested by several folks now that we back up our 200TB
 of data in smaller chunks. This is our structure:

 We have our 200TB in one directory. Under that we have about 10,000
 subdirectories that each have two files in them, ranging in size between
 50GB and 300GB (an estimate). All of those 10,000 directories add up
 to about 200TB. It will grow to 3 or so petabytes over the next
 few years.

 Does anyone have an idea of how to break that up logically within
 bacula, such that we could just do a bunch of smaller Full backups of
 smaller chunks of the data?  The data will never change, and will just
 be added to.  As in, we will be adding more subdirectories with 2 files
 in them to the main directory, but will never delete or change any of
 the old data.

 Is there a way to tell bacula to back up all this, but do it in small
 6TB chunks or something?  So we would avoid the massive 200TB single
 backup job + hundreds of (eventual) small incrementals?  Or some other idea?

 Thanks again for all the feedback!  Please reply-all to this email
 when replying.

 -erich
Assuming the subdirectory names are somewhat reasonably spread through 
the alpha space, can you do something like:
FileSet {
  Name = "A"
  Include {
    File = /pathname/to/backup
    Options {
      Wild = "[Aa]*"
    }
  }
}
...
FileSet {
  Name = "Z"
  Include {
    File = /pathname/to/backup
    Options {
      Wild = "[Zz]*"
    }
  }
}

Then specify a separate Job for each FileSet. To break things up more,
you might need to split on second or later characters rather than the
first one, and you'd also need FileSets covering any directories
starting with non-alphabetic characters. Certainly this could be somewhat
annoying to make sure you are covering all of your directories,
especially if the namespace is populated very lopsidedly, but I believe
it would work. Note that I have not tried this approach, but it does
seem feasible.

I hope you are using a filesystem that behaves well with so many 
subdirectories from one parent (for example, ext3 without dir_index 
would likely do somewhat poorly).

-se






Re: [Bacula-users] excluded folders being included in the backup

2012-03-08 Thread Steve Ellis
On 3/8/12 3:34 PM, Gary Stainburn wrote:
 On Thursday 08 March 2012 20:35:10 Andrea Conti wrote:
 Hello,

 I've added exclude entries for most of the folders
 I don't want to back up but they're still being included,
 Which folders are still being included? All of them or just some?

 A lot of the default folders under windows 7 (for example 'Application
 Data' and 'Local Settings' in every user directory) are really junction
 points, i.e. a kind of link referring to some other folder: you are
 probably excluding the junction points (which bacula does not descend
 into anyway) instead of the actual folders.

 You can use dir /a in a command prompt to quickly find out what
 folders are really junctions and what they point to.

 andrea
 Hi Andrea,

 Examples are:

 C:/Windows/winsxs/
 C:/$Recycle.Bin/
 C:/dell

Just a guess, but I'm thinking this may be due to the trailing '/'
in your 'wild' entries. I know that on my systems, when I exclude
(with Ignore Case), I'm able to exclude both files and directories
appropriately with wild, but I never put a trailing '/' in the directory
names. Also, just to check, I ran a simple test using 'bwild' (a tool I
was not previously aware of), and that suggested the trailing '/'
might be a problem (unclear, since bwild doesn't actually look at
filesystems; instead it takes filenames from a file).

-se





Re: [Bacula-users] Encrypting Data on Tape

2012-01-10 Thread Steve Ellis
On 1/10/12 1:12 PM, Craig Van Tassle wrote:
 I'm sorry if this has been asked before.

 I'm running a Scalar 50 with HP LTO-4 Drives. I want to encrypt the
 data that is put on the tape, We already have encryption going between
 the Dir/SD and FD's. I just want to encrypt the data that will be
 placed on Tape for OffSite storage.

 Has anyone done that or know some pointers to point me to so I can get
 this working?

 Thanks!

I don't have an LTO-4, but as I recall, LTO-4 drives can implement 
encryption on the drive itself--I'm not sure this is sufficient for your 
purposes, or how to make bacula do it, but doing the encryption in the 
drive would likely be better for performance (drive compression would 
happen before encryption, bacula's encryption may well be 
single-threaded, etc).

-se



Re: [Bacula-users] Performance

2011-07-26 Thread Steve Ellis
On 7/25/2011 6:14 PM, James Harper wrote:
 2011/7/25 Rickifer Barros rickiferbar...@gmail.com:
 Hello Guys...

 This weekend I did a backup with a size of 41.92 GB that took 1 hour
 and 24
 minutes with a rate of 8.27 MB/s.

 My Bacula Server is installed in a IBM server connected in a Tape
 Drive LTO4
 (120 MB/s) via SAS connection (3 Gb/s).

 I'm using Encryption and Compression Gzip6.
 Disable software compression. The tape drive will compress much faster
 than the client.

 If you can find compressible patterns in the encrypted data stream then
 you are not properly encrypting it. The only option would be to compress
 before encryption which means you can't use the compression function in
 the tape drive unless the tape drive also does the encryption (some do).

 Use a lower GZIP compression level to see if it gets you better speed
 without sacrificing too much performance... I suspect the speed hit is
 going to be the encryption though.

 James

I was under the impression that _all_ LTO4 drives implement encryption
(though if having the data encrypted while traversing the LAN is your
goal, you'd still have to do something). I don't know enough about it to
know how good the encryption in LTO4 is, however (or, for that matter,
how the key is specified).

Both encryption and compression in SW are going to be much slower than
the tape drive could do them (which is why LTO4 required drive-side
support, as I understood it). Another point: even with your current
config, if you aren't doing data spooling you are probably slowing things
down further, as well as wearing out both the tapes and the heads on the
drive with lots of shoeshining.

-se



Re: [Bacula-users] Performance

2011-07-26 Thread Steve Ellis
On 7/26/2011 5:04 AM, Konstantin Khomoutov wrote:
 On Tue, 26 Jul 2011 00:18:05 -0700
 Steve Ellis el...@brouhaha.com wrote:

 [...]
 Another point, even with your current config, if you
 aren't doing data spooling you are probably slowing things down
 further, as well as wearing out both the tapes and heads on the drive
 with lots of shoeshining.
 (I'm asking as a person having almost zero prior experience with tape
 drives for backup purposes.)

 Among other things, I'm doing full backups of a set of machines to a
 single tape--yes, full backup each time, no incremental/differential
 which means I supposedly have just straightforward data flows from
 FDs to the SD.  At present time I have max concurrent jobs set to 1
 on my tape drive resource and no data spooling turned on.
 Would I benefit from enabling data spooling in this scenario?

 To present some numbers, each machine's data is about 50-80G and I can
 use about 200G for the spool directory which means I could do spooling
 for 3-4 jobs in parallel (as described in [1]).
 Would that improve tape usage pattern?

 1. http://www.bacula.org/en/dev-manual/main/main/Data_Spooling.html


OK, perhaps I'm not the best person to ask, but here's what I do know:

Even with only 1 job at a time, if you aren't able to deliver data to
the drive at its minimum streaming data rate (for LTO4, probably at
least 40MB/sec--possibly varying by manufacturer), then the tape
mechanism will have to stop, go back a bit, wait for more data, then
start up again--all of which takes time and increases wear on the tapes
and drive heads. If you enable data spooling when you can't keep up
with the drive anyway, even with a fairly modest spool size of 10-20G 
per job, I believe you will find that your backups will at least not be 
slower, and may well proceed faster, even with the overhead of spooling 
(assuming that your spool disk(s) are able to send data to the drive 
fast enough to hit near the maximum rate the drive can accept).  If you 
are using concurrent jobs, there is a further benefit:  the data for all 
jobs won't be completely shuffled on the tape.  If I recall, data 
spooling in bacula implicitly turns on attribute spooling, which can 
also help, I believe, if there are lots of small files in your backup.

You don't have to spool an entire job in order to take advantage of 
spooling--and with multiple concurrent jobs, while one is despooling 
others can be spooling (have to watch out for whether your spool area 
can keep up with all the writes and reads, though).

I'm still on LTO3, but I believe that some people advocate RAID0 for 
spool disks for LTO4.  I'm using an otherwise completely idle single 
drive for spooling 3 concurrent jobs and as far as I've noticed, I'm 
able to stream data to the drive at a rate it is happy with (again to LTO3).

I hope this helps,

-se



Re: [Bacula-users] Restore of files when they don't list

2011-06-23 Thread Steve Ellis
On 6/23/2011 1:31 PM, Troy Kocher wrote:
 Listers,

 I'm trying to restore data from medicaid 27, but it appears there are no
 files. There is a file corresponding with this still on the disk, so I
 think it's just been purged from the database.

 Could someone help me through the restore process when the files are no
 longer in the database?

 Thanks!

 Troy

There are really only 3 options here that I can think of:
 1) restore the entire job (probably to a temporary location), then
prune the bits you don't want.
 2) use bscan on the volume to recreate the file list in the db
(note that I have only used this when the job itself had been expired
from the DB)
 3) restore a dump of the catalog that contains the file entries
you wanted that have since been expired

I'm pretty sure I've done both #1 and #2; #3 I'd be much more reluctant
to just try, as I would worry about clobbering more recent catalog data,
unless you used a separate catalog db for the restoration. Unless the
job is really huge, I'd probably do #1, because bscan is (slightly)
dodgy, especially for backups that span volumes (IMHO--note that it is
_much_ better than not having bscan at all). Sorry I can't provide more
detail; hopefully someone else will be able to help more.

-se



Re: [Bacula-users] Encryption times [SEC=UNCLASSIFIED]

2011-06-09 Thread Steve Ellis

Alan-

I've actually not used encryption, but certainly encryption will mean
that you get no benefit from whatever compression your tape
hardware may be capable of--possibly doubling backup time right there,
if you were previously able to keep your tape drive writing at full
speed. I do know that encryption runs on the client and, I believe, is
single-threaded--I would expect that alone to account for at least a
doubling of the spooling time, since the SW encryption will dramatically
slow down the client. If you are not using spooling, then it is likely
that your tape drive is shoeshining now--which can dramatically increase
backup time and reduce tape & drive lifetime. Enabling spooling, if
you are not doing so already, may help if the tape drive is
shoeshining a lot.


If your tape library is capable (LTO4 or better, for example), you may 
want to enable encryption in the drive, instead of in SW.  If you can't 
use encryption from the tape drive, then you may even consider doing 
compression in SW as well--depending on your data this may 
(surprisingly) help--less data to encrypt, and less data to write to 
tape--but compression is also single threaded, IIRC, so that can easily 
also make things even worse.


Sorry I can't be more help,

-se

On 6/8/2011 10:40 PM, Alan Langley wrote:


UNCLASSIFIED

Hi Everybody,

I've just set up encryption on our bacula backup using the explanation
in chapter 39 of the Bacula manual--it has blown out our backup time
from overnight to 3 days. Is this normal? Is there any way to get
the time down? It is only backing up 1.5TB onto a tape library.


Alan Langley

Digital Preservation

Systems Manager

The National Archives of Australia

http://www.naa.gov.au






Re: [Bacula-users] BACULA: free space on tape

2011-05-10 Thread Steve Ellis
On 5/10/2011 12:36 AM, mulle78 wrote:
 Hello @all, I've purged a volume and appended a new backup to the tape.
 Is there a way to find out the free space on a tape, to be sure that a
 reorg of a tape has been done successfully?


If the volume was purged, then Bacula will write from the beginning.
There is no way to know how much space is left on a tape (unless you
have compression disabled--and even then probably only approximately);
however, you can see how much data Bacula believes is on the tape. If
that is reasonably consistent with how large your backup was, then you
know it is the only thing on the tape. From bconsole, "list media"
will display lots of information--look at the "VolBytes" field to see
how much data is stored on the tape. Note that "VolFiles" are tape
files--not your files; unless configured otherwise, bacula puts an
end-of-file mark on the tape every gigabyte (IIRC), and at the end of
every backup that doesn't end on a gigabyte boundary, for faster
positioning during restore.
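
For example (output abridged and values invented; the column set varies a
bit by version):

*list media pool=Default
+---------+------------+-----------+----------------+----------+
| MediaId | VolumeName | VolStatus | VolBytes       | VolFiles |
+---------+------------+-----------+----------------+----------+
|       7 | tape-0007  | Append    | 43,867,001,856 |       41 |
+---------+------------+-----------+----------------+----------+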



Re: [Bacula-users] Tar and Bacula doesn't work together.

2011-05-02 Thread Steve Ellis
On 5/2/2011 1:48 AM, obviously wrote:
 Hi all

 My first post here. So don't shoot me if I say/do stupid things.

 I got a problem with Bacula. The version I use is 2.4.4 on debian etch.
Since 2.4.4 is now nearly 4 years old, you really ought to try a more 
recent version.
 My Bacula runs smoothly; everything seems to work.

 But now, when I try to execute a tar command, I get some errors.

 How can I solve this, and is there a tool to see the tapes' content while
 the daemon is running, without using bconsole?

Bacula tapes are not in a format that tar understands--the only
meaningful ways to find out what is on the tapes are bconsole,
bscan, bls, or some SQL commands against the database backend that you
are using. Of these, by far the easiest is bconsole. Reading the tapes
to see what is on them (i.e. using bscan or bls) is hugely slow compared
to using the database to get that information. If your database is
missing what you need (either due to misconfiguration or disaster
recovery), then bscan can help you.
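
For completeness, reading a tape directly looks something like this (the
last argument is the Device name from your bacula-sd.conf; the volume name
is an example):

# list the job records on a volume (slow--it reads the tape)
bls -j -V Volume0001 Drive-1
# list the backed-up files themselves
bls -V Volume0001 Drive-1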

Hope this helps,

-se




Re: [Bacula-users] Problem rejected Hello command with DIR 2.2.6 and FD 5.0.2

2011-04-20 Thread Steve Ellis
On 4/20/2011 3:26 PM, John Drescher wrote:
 This rule is not the real truth.
 I'm backing up a 2.4.4 (Debian Lenny) client on a 5.0.2 (Debian Squeeze)
 Director (and Storage)
 That does not violate the rule I gave.

 Does the Bacula team plan to provide such compatibility matrix ?

 This rule has been told to us from the developers countless times on this 
 list.

 John

Just to clarify--at least in my mail reader, part of John's answer was
rendered as a quoted line rather than with an explicit greater-than sign.
I believe what he sent was: director the same version as storage, and the
server's version greater than or equal to the client's. Note that bacula
2.2 is more than 3 years old.

-se




Re: [Bacula-users] Dell PowerVault 124T tape drive

2011-04-13 Thread Steve Ellis
On 4/13/2011 4:48 AM, Steffen Fritz wrote:
 Hey folks,


 Something strange is happening within my bacula configuration. Any help
 is much appreciated!

 1. This is what bconsole's status tells me about my tape drive. No pool?

 Device Drive-1 (/dev/nst0) is mounted with:
  Volume:  sonntag
  Pool:*unknown*
  Media type:  LTO-4
  Drive 0 status unknown.
  Total Bytes Read=0 Blocks Read=0 Bytes/block=0
  Positioned at File=0 Block=0
 

 Used Volume status:
 sonntag on device Drive-1 (/dev/nst0)
  Reader=0 writers=0 devres=0 volinuse=0
If you are using a bacula earlier than 5.0.3 (I believe), this can happen
when there is a tape change during a backup. When I've seen this issue,
the tape was actually in a pool, and it was (mostly) a display
issue--however, it manifested in that subsequent jobs couldn't start
because of it until the running job (the one that triggered the tape
change) finished. In 5.0.3, I've heard some confirmation that this issue
has been fixed--certainly I haven't seen it myself since switching to
5.0.3, but I never saw it very often before either, so my datapoints are
incomplete.

-se



Re: [Bacula-users] Should I use data spooling when writing to nfs mounted storage?

2011-03-03 Thread Steve Ellis
On 3/3/2011 6:52 AM, Fabio Napoleoni - ZENIT wrote:

 JobId 7: Spooling data ...
 JobId 7: Job write elapsed time = 00:13:07, Transfer rate = 1.295 M 
 Bytes/second
 JobId 7: Committing spooled data to Volume FullVolume-0004. Despooling 
 1,021,072,888 bytes ...
 JobId 7: Despooling elapsed time = 00:02:19, Transfer rate = 7.345 M 
 Bytes/second
 JobId 7: Sending spooled attrs to the Director. Despooling 6,816,214 bytes ...

 There is only a little improvement of performances.

 So I'm asking: what is the transfer rate in your jobs? Are these rates
 normal, or are they slow compared to yours?

 Thanks.

 --
 Fabio Napoleoni - ZENIT
 fa...@zenit.org

Given your environment, the despool rate is probably about what you
would expect. On the other hand, the job write rate is really quite
low--assuming this was a full job and it was the only job running. If
you are using compression, then that could explain a lot. Also, I don't
think you mentioned whether the job was on a remote client or not--if it
was, then 8-10Mbytes/sec is about the most you could expect on a
100Mbit switched network. When I run only a single job at a time, I'm
able to spool the local client at 50Mbytes/sec or more, and remote
clients spool at ~20-30Mbytes/sec (usually running 2-3 at once on a
gigabit network)--but I don't use compression or encryption.

Is the server fairly busy with other stuff, or is the server machine 
underpowered?

Look at spooling attributes even if you don't spool data; that may
help. Turn off compression (if you use it), or lower the level, unless
you need it (with disk storage, you may want to keep it). On a 100Mbit
network, remote clients are never going to be fast enough to drive
multiple spindles of spool (i.e. some kind of RAID), but you should be
able to do quite a bit better than what you are seeing (unless you need
compression).

-se



Re: [Bacula-users] 124t users?

2011-02-28 Thread Steve Ellis
On 2/28/2011 9:37 AM, Jeremiah D. Jester wrote:
 Steve,
 I'm using a 124t w/ LTO4 tapes. I would appreciate it if I could see your
 conf files for comparison.
 Gracias,
 JJ

 Jeremiah Jester
 Informatics Specialist
 Microbiology – Katze Lab
 206-732-6185

Tapeinfo output for the LTO3 drive & Dell 124T:
# tapeinfo -f /dev/sg1
Product Type: Tape Drive
Vendor ID: 'IBM '
Product ID: 'ULTRIUM-TD3 '
Revision: '6B20'
Attached Changer API: No
SerialNumber: '1210250007'
MinBlock: 1
MaxBlock: 16777215
SCSI ID: 6
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: 0x28
Density Code: 0x42
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
Block Position: 236
Partition 0 Remaining Kbytes: -1
Partition 0 Size in Kbytes: -1
ActivePartition: 0
EarlyWarningSize: 0
# tapeinfo -f /dev/sg2
Product Type: Medium Changer
Vendor ID: 'DELL'
Product ID: 'PV-124T '
Revision: '0053'
Attached Changer API: No
SerialNumber: 'CH7CB35779'
SCSI ID: 6
SCSI LUN: 1
Ready: yes

bacula-sd.conf autochanger & tape device sections (the buffer size & block
size items are left over from some time back--they may not be optimal or
even helpful anymore):
Autochanger {
  Name = LTO3-changer
  Device = LTO3
  Changer Device = /dev/sg2
  Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
}
Device {
  Name = LTO3
  Media Type = LTO3
  Archive Device = /dev/nst0
  Autochanger = Yes
  Drive Index = 0
  Automatic Mount = Yes
  Always Open = Yes
#  Volume Poll Interval = 3 min
#  Close On Poll = Yes
# Offline On Unmount = Yes
  Removable Media = Yes
  Random Access = No
  Maximum Job Spool Size = 50G
  Maximum Block Size = 262144
  Maximum Network Buffer Size = 262144
  Maximum File Size = 5G
  Spool Directory = /backup/spool
  Alert Command = "sh -c 'smartctl -H -l error /dev/sg1'"
}

-se


Re: [Bacula-users] 124t users?

2011-02-25 Thread Steve Ellis
I've got a Dell 124T w/ an LTO3 drive (bought used for $800 on Ebay
last year), and so far I've had no trouble (Bacula 5.0.3 w/ Fedora 14).
I may not be a very heavy user, however, as I'm running it on a home
network (clients are Fedora 14, WinXP & Win7-64)--sending ~1.5TB to
tape in a month (fulls, differentials and incrementals).


Used a standalone LTO2 for 2 years previously with bacula.

I'll happily share my config if you think it will help.

-se

On 2/25/2011 2:26 PM, Jeremiah D. Jester wrote:


I've been having ongoing issues with bacula and my 124t. Anyone out
there having any problems? If not, I'd like to know that as well.
Maybe we can compare storage conf files?


Thanks, JJ

Jeremiah Jester
Informatics Specialist

Microbiology -- Katze Lab

206-732-6185






Re: [Bacula-users] SD Losing Track of Pool

2011-01-20 Thread Steve Ellis
On 1/20/2011 7:18 AM, Peter Zenge wrote:

 Second, in the Device Status section at the bottom, the pool of LF-F-0239
 is listed as *unknown*; similarly, under Jobs waiting to reserve a drive,
 each job wants the correct pool, but the current pool is listed as "".

 Admittedly I confused the issue by posting an example with two Pools
 involved. Even in that example, though, there were jobs using the same
 pool as the mounted volume, and they wouldn't run until the 2 current
 jobs were done (which presumably allowed the SD to re-mount the same
 volume, set the currently mounted pool correctly, and then 4 jobs were
 able to write to that volume concurrently, as designed).

 I saw this issue two other times that day; each time the SD changed the 
 mounted pool from LF-Inc to *unknown* and that brought concurrency to a 
 screeching halt.

 Certainly I could bypass this issue by having a dedicated volume and device 
 for each backup client, but I have over 50 clients right now and it seems 
 like that should be unnecessary.  Is that what other people who write to disk 
 volumes do?
I've been seeing this issue myself--it only seems to show up for me if a 
volume change happens during a running backup.  Once that happens, 
parallelism using that device is lost.  For me this doesn't happen too 
often, as I don't have that many parallel jobs, and most of my backups 
are to LTO3, so volume changes don't happen all that often either.  
However, it is annoying.

I thought I had seen something that suggested to me that this issue 
might be fixed in 5.0.3, I've recently switched to 5.0.3, but haven't 
seen any pro or con results yet.

On a somewhat related note, it seems to me that during despooling, all
other spooling jobs stop spooling--this might be intentional, I suppose,
but I think my disk subsystem would be fast enough to keep up with one
despool to LTO3 while other jobs continue to spool--I could
certainly understand if no other job using the same device were allowed
to start despooling during a despool, but that isn't what I observe.

If my observations are correct, it would be nice if this was a 
configurable choice (with faster tape drives, few disk subsystems would 
be able to handle a despool and spooling at the same time)--some of my 
jobs stall long enough when this happens to allow some of my desktop 
backup clients to go to standby--which means those jobs will fail (my 
backup strategy uses Wake-on-LAN to wake them up in the first place).  I 
certainly could spread my jobs out more in time, if necessary, to 
prevent this, but I like for the backups to happen at night when no one 
is likely to be using the systems for anything else.  I guess another 
option would be to launch a keepalive WoL script when a job starts, and 
arrange that the keepalive program be killed when the job completes.

-se



Re: [Bacula-users] Excluding subversion folders

2011-01-10 Thread Steve Ellis
On 1/10/2011 7:29 AM, Guy wrote:
 Indeed it was, and that for me is the right thing.  It's all in 
 subversion, which is itself backed up.

 ---Guy
 (via iPhone)

 On 10 Jan 2011, at 15:18, Dan Langilled...@langille.org  wrote:

However, if you have people planning to make commits to subversion, 
you've also now excluded their (uncommitted) changes.  If that is what 
you want (which it could be, if all your subversion checkouts are really 
'read-only', i.e. for reference), then you're all set.  Otherwise, if you 
want to exclude just the contents of the .svn directory (which holds 
unmodified copies of everything in the parent directory), you might want 
to key off one of the file or directory names found inside the .svn 
directory (there are several; unfortunately, none is quite as unlikely to 
appear elsewhere as .svn).  A sketch of that idea follows.
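
For example, since every .svn directory contains a file named 'entries', 
something like this should exclude the .svn directories themselves--a 
minimal sketch, assuming a Bacula version that supports the Exclude Dir 
Containing directive (the FileSet name and path here are hypothetical):

   FileSet {
     Name = "SkipSvnMetadata"
     Include {
       Exclude Dir Containing = "entries"  # 'entries' lives inside every .svn dir
       Options {
         signature = MD5
       }
       File = /home
     }
   }

The caveat is exactly the one above: any unrelated directory that happens 
to contain a file named 'entries' will be skipped as well.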

-se



Re: [Bacula-users] bacula: Keyword Name not permitted in this resource

2010-12-27 Thread Steve Ellis
I believe the error is in the file that you had previously commented out 
(as I suspect you also believe).  Below I've snipped what I think is the 
problematic line in the pc-agenda_full.conf file:

On 12/27/2010 12:00 PM, der_Angler wrote:
 pc-agenda_full.conf

...
 FileSet {
  Name = "pc-agenda Full"
  Enable VSS = yes
  Include {
    Options {
      signature = MD5
    }
    File = "C:\"
  }
 }


The problem is that the parser interprets "C:\" as a string consisting 
of a 'C', a ':' and a '"'--because the backslash escapes the closing 
quote.  I believe you will find the easier solution is to use a '/' 
instead of a '\', although doubling the '\' character may also work.  In 
other words, try this:
     File = "C:/"
instead of:
     File = "C:\"

If this doesn't work (and I believe it will, as that is how I have my 
windows clients configured), you might also be able to use:
     File = "C:\\"
But I haven't tried this.

You may also find that the "Ignore Case = yes" option is useful on 
windows--though admittedly it may only come into play if you specify 
particular files or directories to include or exclude.

Hope this helps,

-se





Re: [Bacula-users] RES: LTO-3 tape not compressing data/premature end of tape space

2010-09-01 Thread Steve Ellis
  On 9/1/2010 7:09 AM, Brian Debelius wrote:
Is Maximum Volume Bytes set in the catalog for these tapes?

 On 9/1/2010 9:15 AM, Rodrigo Ferraz wrote:
 Certainly. The schedule comprises 6 different tapes, between monthly and 
 weekly pools, and the problem is exactly the same with all of them.

 Rodrigo

If you have the 'tapeinfo' command, you can also check whether the 
drive's generic SCSI device has HW compression enabled (it shows up as 
DataCompEnabled in my environment); for example:
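
A quick check--the /dev/sg1 path is an assumption, so substitute whatever 
'lsscsi -g' (or dmesg) reports for your drive:

   tapeinfo -f /dev/sg1 | grep -i datacomp

Look for 'DataCompEnabled: yes' in the output.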

-se



Re: [Bacula-users] Job is waiting on Storage

2010-08-31 Thread Steve Ellis
  On 8/31/2010 5:44 AM, Marco Lertora wrote:
Hi!

 I've the same problem!  Has anyone found a solution?

 I have 3 concurrent jobs, which back up from different FDs to the same
 device on the SD.
 All jobs use the same pool, and the pool uses Maximum Volume Bytes as
 the volume-splitting policy, as suggested in the docs.
 All jobs have the same priority.

 Everything starts well, but after some volume changes (because the
 volumes reach the max volume size) the storage daemon loses the pool
 information of the mounted volume.
 So the jobs started after that wait on the SD for a mounted volume with
 the same pool as the one wanted by the job.

 Regards
 Marco Lertora


I have seen something very much like this issue, except with tape 
drives.  I was trying to document it more fully before sending it in.

It seems that, for me, after a tape change during a backup the SD 
doesn't discover the pool of the mounted tape until all currently 
running jobs complete, so no new jobs can start; once all running jobs 
finish, the SD discovers the mounted volume's pool, and any jobs stuck 
because the pool wasn't known can start.  I didn't know that the same 
(or a similar) issue affected file volumes--it is relatively rare that I 
hit the tape version of this problem, since not many of my backups span 
tapes.

If this is easily reproducible with tape volumes, someone should file a 
bug report.

-se





Re: [Bacula-users] mtx only sees half of the tapes in my Powervault 124T

2010-08-26 Thread Steve Ellis
  On 8/26/2010 5:23 PM, Ben Beuchler wrote:
 I just hooked up my shiny new Powervault 124T with dual magazines to
 an Ubuntu 10.04 server via SAS.  For some reason, mtx can only see the
 first (left) magazine.  The output of mtx is below.

 The front panel interface sees all 16 slots just fine.

 Any suggestions?

 -Ben
I think this was discussed pretty recently--as I recall, there is a 
configuration setting in the changer to tell it that both magazines are 
present.  I too have a PV 124T, but luckily mine came with both 
magazines enabled.
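
Once the changer is configured to present both magazines, you can verify 
from the host--a minimal check, assuming /dev/sg2 is the changer's 
generic SCSI device (yours may differ):

   mtx -f /dev/sg2 status

With both magazines enabled, all 16 storage elements should be listed.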

-se



Re: [Bacula-users] Problem restoring files. [SOLVED]

2010-02-25 Thread Steve Ellis
On 2/25/2010 4:28 PM, Erik P. Olsen wrote:
 I've changed the server IP-address in the storage resource to a host name and
 the windows client didn't know how to resolve that. Changing it back to
 IP-address solved the problem.

 I wonder if Windows Vista uses a host name resolution file which could be used
 to translate the host name into an IP-address?


If you have a DNS server, you can point Vista at it and that should work 
(and who doesn't have a DNS server--I have one at my house!).  However, 
based on a quick googling, Vista does support a hosts file; it appears 
to live in WINDOWS\System32\drivers\etc\HOSTS, but requires elevated 
privileges to edit.  For example:
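
A single line mapping the storage daemon's host name to its address is 
all that's needed--the name and address here are hypothetical:

   192.168.1.10    backupserver

(Edit the file from an editor started with 'Run as administrator'.)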

-se




Re: [Bacula-users] SOLVED?: Dead slow backups with bacula 5.0, mysql and accurate

2010-02-20 Thread Steve Ellis

On 2/19/2010 5:55 PM, Frank Sweetser wrote:


The best way to get more data about what's going on is to use the 'explain'
mysql command.  First, get the complete SQL query that's taking too long to
run by using the 'show full processlist' command - that way the results won't
get truncated.

Then, run the query manually, but prefixed with the 'explain' command:

explain SELECT Path.Path, Filename.Name, ...

This should give you more data about exactly how mysql is going about
executing the query, which should hopefully in turn point to why it's taking
so ridiculously long and how that might be fixed.

   
Thanks, Frank, for the tip.  I tried exactly what you said and found at 
least one helpful index addition.  In fact, even the explain took so long 
that I gave up and reviewed the make_mysql_tables script instead, which 
provided a clue that not only made the explain run faster but also 
resolved my horribly slow backups.  make_mysql_tables suggests adding 
INDEX (FilenameId, PathId) on the File table if verifies are too slow--it 
also recommends several other indices, all of which I already had 
(PathId, FilenameId and JobId).  I ran this SQL statement:

CREATE INDEX FilenameId_2 ON File (FilenameId, PathId);

That took quite a while (maybe 20-30 minutes?).

Then, using the full query from 'mysqladmin -v processlist' with 
'explain' to tell me how mysql would run it (sorry about the width below, 
it may wrap unpleasantly):

mysql> explain SELECT Path.Path, Filename.Name, Temp.FileIndex, 
Temp.JobId, LStat, MD5 FROM ( SELECT FileId, Job.JobId AS JobId, 
FileIndex, File.PathId AS PathId, File.FilenameId AS FilenameId, LStat, 
MD5 FROM Job, File, ( SELECT MAX(JobTDate) AS JobTDate, PathId, 
FilenameId FROM ( SELECT JobTDate, PathId, FilenameId FROM File JOIN Job 
USING (JobId) WHERE File.JobId IN (13275,13346,13350) UNION ALL SELECT 
JobTDate, PathId, FilenameId FROM BaseFiles JOIN File USING (FileId) 
JOIN Job ON (BaseJobId = Job.JobId) WHERE BaseFiles.JobId IN 
(13275,13346,13350) ) AS tmp GROUP BY PathId, FilenameId ) AS T1 WHERE 
(Job.JobId IN ( SELECT DISTINCT BaseJobId FROM BaseFiles WHERE JobId IN 
(13275,13346,13350)) OR Job.JobId IN (13275,13346,13350)) AND 
T1.JobTDate = Job.JobTDate AND Job.JobId = File.JobId AND T1.PathId = 
File.PathId AND T1.FilenameId = File.FilenameId ) AS Temp JOIN Filename 
ON (Filename.FilenameId = Temp.FilenameId) JOIN Path ON (Path.PathId = 
Temp.PathId) WHERE FileIndex > 0 ORDER BY Temp.JobId, FileIndex ASC;

+----+--------------------+------------+--------+----------------------------------------------+--------------+---------+-------------------------+--------+---------------------------------+
| id | select_type        | table      | type   | possible_keys                                | key          | key_len | ref                     | rows   | Extra                           |
+----+--------------------+------------+--------+----------------------------------------------+--------------+---------+-------------------------+--------+---------------------------------+
|  1 | PRIMARY            | <derived2> | ALL    | NULL                                         | NULL         | NULL    | NULL                    | 256855 | Using where; Using filesort     |
|  1 | PRIMARY            | Path       | eq_ref | PRIMARY                                      | PRIMARY      | 4       | Temp.PathId             |      1 |                                 |
|  1 | PRIMARY            | Filename   | eq_ref | PRIMARY                                      | PRIMARY      | 4       | Temp.FilenameId         |      1 |                                 |
|  2 | DERIVED            | <derived3> | ALL    | NULL                                         | NULL         | NULL    | NULL                    | 256855 |                                 |
|  2 | DERIVED            | File       | ref    | JobId,PathId,FilenameId,JobId_2,FilenameId_2 | FilenameId_2 | 8       | T1.FilenameId,T1.PathId |      8 | Using where                     |
|  2 | DERIVED            | Job        | eq_ref | PRIMARY                                      | PRIMARY      | 4       | bacula.File.JobId       |      1 | Using where                     |
|  6 | DEPENDENT SUBQUERY | NULL       | NULL   | NULL                                         | NULL         | NULL    | NULL                    |   NULL | no matching row in const table  |
|  3 | DERIVED            | <derived4> | ALL    | NULL                                         | NULL         | NULL    | NULL                    | 259176 | Using temporary; Using filesort |
|  4 | DERIVED            | Job        | range  | PRIMARY                                      | PRIMARY      | 4       | NULL                    |      3 | Using where                     |
|  4 | DERIVED            | File       | ref    | JobId,JobId_2                                | JobId_2      | 4       | bacula.Job.JobId        | [remainder of the row, and of the message, truncated in the archive]

[Bacula-users] Dead slow backups with bacula 5.0, mysql and accurate

2010-02-19 Thread Steve Ellis
I don't know if this is specific to MySQL or not.  My system: Fedora 12 
x86-64, Bacula 5.0.0 (installed from Fedora's rawhide), MySQL 5.1.42.  
I've been happily running Bacula 3.0.3 on this particular machine and 
config for several months without issue (and earlier Bacula releases, 
though without Accurate, for several years).  After trying an upgrade to 
5.0, however, accurate backups are dreadfully slow (at least a 
differential is; I assume the same applies to incrementals).  If I turn 
off Accurate, backups proceed fine, but with Accurate on, mysqladmin 
processlist will eventually show this query taking hours (or perhaps 
days, I haven't waited that long):

+----+--------+-----------+--------+---------+------+-----------+------------------------------------------------------------------------------------------------------+
| Id | User   | Host      | db     | Command | Time | State     | Info                                                                                                 |
+----+--------+-----------+--------+---------+------+-----------+------------------------------------------------------------------------------------------------------+
|  9 | bacula | localhost | bacula | Query   |  349 | executing | SELECT Path.Path, Filename.Name, Temp.FileIndex, Temp.JobId, LStat, MD5 FROM ( SELECT FileId, Job.Jo |
| 18 | root   | localhost |        | Query   |    0 |           | show processlist                                                                                     |
+----+--------+-----------+--------+---------+------+-----------+------------------------------------------------------------------------------------------------------+
I've increased several things in my MySQL config to try to make this 
better (although Bacula 3.0.3 was fine with the old config), but that 
doesn't seem to be the issue.  I've checked the MySQL indices, and the 
typical ones seem to be what they should be (at least as far as I can 
tell), although I'm not familiar enough with the queries that Accurate 
issues to know whether I should be looking at other indices:


mysql> show index from File;
+-------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| File  |          0 | PRIMARY    |            1 | FileId      | A         |    15848513 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId      |            1 | JobId       | A         |         522 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | PathId     |            1 | PathId      | A         |      211313 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | FilenameId |            1 | FilenameId  | A         |      754691 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId_2    |            1 | JobId       | A         |         522 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId_2    |            2 | PathId      | A         |     1584851 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId_2    |            3 | FilenameId  | A         |    15848513 |     NULL | NULL   |      | BTREE      |         |
+-------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
7 rows in set (0.00 sec)

mysql> show index from Path;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Path  |          0 | PRIMARY  |            1 | PathId      | A         |      207920 |     NULL | NULL   |      | BTREE      |         |
| Path  |          1 | Path     |            1 | Path        | A         |      207920 |      255 | NULL   |      | BTREE      |         |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
2 rows in set (0.00 sec)

mysql> show index from Filename;
[output truncated in the archive]


Re: [Bacula-users] Has anyone installed Bacula on Fedora 12?

2010-01-07 Thread Steve Ellis
On 1/6/2010 3:55 PM, Terry L. Inzauro wrote:
 On 01/06/2010 05:40 PM, brown wrap wrote:

   I tried compiling it, and received errors which I posted, but didn't
 really get an answer to. I then started to look for RPMs. I found the
 client rpm, but not the server rpm unless I don't know what I'm looking
 for. Can someone point me to the rpms I need? Thanks.


 greg

  
Fedora distributes RPMs for Bacula 3.0.3 for FC12; here's what I have 
installed:

$ rpm -qa | grep bacula
bacula-sysconfdir-3.0.3-5.fc12.x86_64
bacula-director-mysql-3.0.3-5.fc12.x86_64
bacula-docs-3.0.3-5.fc12.x86_64
bacula-console-3.0.3-5.fc12.x86_64
webacula-3.4-1.fc12.noarch
bacula-console-gnome-3.0.3-5.fc12.x86_64
bacula-traymonitor-3.0.3-5.fc12.x86_64
bacula-storage-mysql-3.0.3-5.fc12.x86_64
bacula-console-bat-3.0.3-5.fc12.x86_64
bacula-common-3.0.3-5.fc12.x86_64
bacula-client-3.0.3-5.fc12.x86_64
bacula-storage-common-3.0.3-5.fc12.x86_64
bacula-director-common-3.0.3-5.fc12.x86_64

They are packaged differently from the bacula.org RPMs.  Before Fedora 
12, I built my own Bacula RPMs from the source RPM, since the 
distributed bacula was 2.4.x, but FC12 now includes 3.0.3, which was good 
enough for me--I just used 'yum search bacula' to find the ones I 
needed.  Installing the server side looks something like the sketch below.
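
A sketch--pick the director/storage variants that match your catalog 
database (these are the MySQL ones from the list above):

   yum install bacula-director-mysql bacula-storage-mysql \
               bacula-console bacula-client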

Hope this helps,

-se



Re: [Bacula-users] logwatch

2009-12-14 Thread Steve Ellis
On 12/14/2009 7:22 AM, Craig White wrote:
 A relatively new bacula 3.0.3 installation on CentOS 5

 I get an error every day from logwatch...

...
 Cannot find shared script applybaculadate

...
 *ApplyBaculaDate =

 What is it that I am supposed to do?

 Craig


I had the same error when I switched back to the Fedora-supplied RPM 
after upgrading to Fedora 12.  For Fedora, there is an updated set of 
bacula RPMs in updates-testing that fixes the problem.  With the updated 
bacula-director-common RPM, applybaculadate lands in 
/etc/logwatch/scripts/shared (and the .conf file still looks the same as 
yours).  If you can't get an updated RPM, copying the script to that 
location may fix the problem you are having; for example:
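
A minimal sketch, assuming you can extract applybaculadate from an 
updated package or the Bacula source tree (the source path below is a 
placeholder):

   mkdir -p /etc/logwatch/scripts/shared
   cp /path/to/applybaculadate /etc/logwatch/scripts/shared/
   chmod +x /etc/logwatch/scripts/shared/applybaculadate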

Hope that helps,
-se



Re: [Bacula-users] Network transfer Speed

2009-12-10 Thread Steve Ellis
On 12/10/2009 9:33 AM, Hayden Katzenellenbogen wrote:
 Steve,

 Here is a quick snap of my top during a full backup.

 top - 09:32:35 up 1 day, 18:38,  1 user,  load average: 11.41, 11.60,
 10.75
 Tasks: 161 total,   1 running, 160 sleeping,   0 stopped,   0 zombie
 Cpu0  :  0.3%us,  0.3%sy,  0.0%ni,  0.0%id, 99.4%wa,  0.0%hi,  0.0%si,
 0.0%st
 Cpu1  : 13.0%us,  1.4%sy,  0.0%ni, 18.1%id, 66.9%wa,  0.0%hi,  0.6%si,
 0.0%st
 Cpu2  :  1.9%us,  0.3%sy,  0.0%ni,  0.0%id, 97.8%wa,  0.0%hi,  0.0%si,
 0.0%st
 Cpu3  :  0.6%us,  1.3%sy,  0.0%ni, 22.7%id, 74.8%wa,  0.0%hi,  0.6%si,
 0.0%st
 Cpu4  :  0.6%us,  0.3%sy,  0.0%ni,  0.0%id, 99.1%wa,  0.0%hi,  0.0%si,
 0.0%st
 Cpu5  :  0.6%us,  0.0%sy,  0.0%ni, 20.8%id, 78.3%wa,  0.0%hi,  0.3%si,
 0.0%st
 Cpu6  :  0.4%us,  0.7%sy,  0.0%ni, 52.5%id, 46.5%wa,  0.0%hi,  0.0%si,
 0.0%st
 Cpu7  :  0.6%us,  0.3%sy,  0.0%ni,  7.6%id, 91.5%wa,  0.0%hi,  0.0%si,
 0.0%st
 Mem:   4057152k total,  4035996k used,21156k free,14372k buffers
 Swap:  8000360k total, 1536k used,  7998824k free,  3852936k cached

PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND

   6118 root  20   0 60388 2212 1296 S   24  0.1 102:35.41 bacula-fd

   6240 root  20   0  130m 2092 1348 S3  0.1 227:19.75 bacula-sd

 23214 hayden20   0 18992 1308  932 R1  0.0   0:00.24 top


 There are many occasions where the bacula-sd and bacula-fd have between
 10-50% and all the wait percentages are through the roof as you see
 above. Also as you can see the load average is also through the roof.

 -H

Hayden-

I'm not sure what else you might be running on this box, but I've 
noticed on mine that memory usage recently went up significantly, and I 
started seeing high load average peaks when only bacula (or the database 
behind it) should have been generating load.  The bacula daemons 
themselves didn't seem to be using significantly more memory than 
before, I hadn't changed my bacula version, and I hadn't changed any 
configuration from what had been running smoothly.  In my case, after 
detuning my MySQL config to use less memory, my load averages improved, 
and so did bacula performance; a sketch of the kind of change I mean is 
below.
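
Illustrative only--which directives to shrink (and by how much) depends 
on your MySQL version and storage engine, so treat these names and 
values as assumptions; the point is just to cap the big buffers:

   # /etc/my.cnf
   [mysqld]
   key_buffer_size         = 64M    # reduced from a larger setting
   innodb_buffer_pool_size = 256M   # likewise, if you use InnoDB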

I have not dug into this very far yet, but my guess is that either 
kernel behavior changed with some update I applied (perhaps 
significantly increased kernel buffering?), or some library used by lots 
of programs now uses more memory than it used to.  Certainly my box, 
with only 2 processors, shouldn't need as much physical RAM as yours, 
but both your 8-core box and my 2-core box have 4 GB of RAM.

I have no silver bullets here, but if you see the load average spike 
dramatically during a backup, along with very high wait percentages on 
your CPUs, then look at which programs are using the most VM and see 
whether they can be temporarily turned off or detuned to use less 
memory.  An even better answer might be to add RAM to your machine 
(assuming it's running a 64-bit OS), but the first approach can be done 
without taking the box down and without any expense, at least to test 
the theory.

Hope that helps,

-se



Re: [Bacula-users] Network transfer Speed

2009-12-08 Thread Steve Ellis
I have what sounds like a less powerful system than yours, and I see 
significantly faster performance from Bacula 3.0.2 (and before that with 
2.4 and earlier).  My system uses a 3ware 9500 connected merely via 
32-bit PCI, and I have a single separate spool drive connected to the 
motherboard's SATA ports.  My machine does have motherboard GigE, but as 
previously pointed out, that shouldn't matter.  The Director, File 
Daemon, Storage Daemon and MySQL all reside on the same system.

Just to reconfirm:
 * Is this the performance you see with full backups?  (Differentials 
and incrementals will always be slower, of course.)
 * Are you avoiding software compression?
 * Are the files on the raid array very small, or are there thousands of 
files per directory?  (This can matter, depending on the filesystem 
used.)


You may want to verify that the spool drive is performing as expected, 
for example by copying from the raid array to the spool drive--although 
I doubt that is the issue you are hitting.  A synthetic test is sketched 
below.
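
A rough sequential-write check that takes the raid array out of the 
picture entirely (the path is a placeholder, and the test writes a ~4 GB 
file, so make sure there's room):

   dd if=/dev/zero of=/spool/testfile bs=1M count=4096 conv=fdatasync
   rm /spool/testfile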

-se

On 12/8/2009 2:38 PM, Hayden Katzenellenbogen wrote:
 Write cache is enabled.

 I have a separate sata drive connected to the motherboards sata controller 
 that I am using to spool the data.

 The only thing that is coming off this raid array is the data to be backed 
 up.  Writing to that array is done only once per hour and totals about 1gig 
 per day.

 I must say I am at a total loss.

 -H

 -Original Message-
 From: John Drescher [mailto:dresche...@gmail.com]
 Sent: Tuesday, December 08, 2009 2:30 PM
 To: Timo Neuvonen; bacula-users
 Subject: Re: [Bacula-users] Network transfer Speed

 On Tue, Dec 8, 2009 at 5:13 PM, Timo Neuvonentimo-n...@tee-en.net  wrote:

 John Drescherdresche...@gmail.com  kirjoitti viestissä
 news:387ee2020912081330v6e8fa197s5009971859acb...@mail.gmail.com...
  
 On Tue, Dec 8, 2009 at 4:16 PM, Hayden Katzenellenbogen
 hay...@nextlevelinternet.com  wrote:

 Yes it is a raid-6 configuration running on a 3ware 9690SA-8I.

  
 I've never had this card, it should be powerful one I think. But I've
 sometimes experienced really poor write performance with 3ware 9550 and 9650
 cards when unit's write cache was disabled (controller default). If it is
 disabled, try if it has any effect.

  
 Yes if the cache is disabled you will get horrible random write
 performance. If this is also the spool drive that can be the reason.

 John



Re: [Bacula-users] job run script error

2009-09-30 Thread Steve Ellis
On 9/30/2009 10:04 AM, Joseph L. Casale wrote:
 Hmm,
 I had the following:
RunScript {
RunsWhen = After
RunsOnFailure = Yes
FailJobOnError = Yes
Command = scp -i /path/to/key -o StrictHostkeyChecking=no 
 /var/lib/bacula/*.bsr u...@host:/path/Bacula/
Command = /usr/lib/bacula/delete_catalog_backup
}

 But I missed the part where the default is runs on client. I since then
 added RunsOnClient = No (even though the client in this case is the 
 director)
 which only changed the error to:

 AfterJob: /var/lib/bacula/*.bsr: No such file or directory

 If the same string is in a shell script, and the command simply points to it,
 it works.

 Thanks!
 jlc



The problem you are hitting is perhaps subtle: there is no shell running 
to expand the '*' in your command line.  The easiest answer is probably 
to put the two commands in a shell script and invoke that from the 
RunScript instead--then there will be a shell to do the filename 
expansion.  A sketch follows.
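
Something like this, built from the two commands in your RunScript (the 
script path and the destination are hypothetical stand-ins--your real 
destination was anonymized above):

   #!/bin/sh
   # /usr/lib/bacula/copy_bsr.sh -- the shell expands the *.bsr glob here
   scp -i /path/to/key -o StrictHostkeyChecking=no \
       /var/lib/bacula/*.bsr user@backuphost:/path/Bacula/
   /usr/lib/bacula/delete_catalog_backup

and then, in the RunScript resource:

   Command = "/usr/lib/bacula/copy_bsr.sh"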

Hope this helps,

-se



Re: [Bacula-users] VSS on Vista/Win2008 x64

2009-08-05 Thread Steve Ellis

On 8/5/2009 9:34 AM, Shawn wrote:

Yes, the problem is the Hello command, which was introduced in 3.x.

On a 2.x director, it will simply state "Hello command rejected" as 
the failure in connecting to the FD from the director.  I've tested 
this before and got the same results regardless of the platform (Mac OS 
X PPC/Intel, Fedora 6 i386, Fedora 10 i386, Vista x64, Win2k8 x64, 
Win2k3 x64 / 32-bit)--so, no, it seems I'm stuck either praying I can 
install a properly updated Bacula director on my amd64 Ubuntu server, 
or hoping that someone moves the packages in Ubuntu/Fedora's repository 
to something newer than 2.4.4.


I haven't tried to install it, but Fedora rawhide has bacula 3.0.2--so 
it seems that F12 is going to get 3.0.2, at least.  If the RPMs that 
Scott (?) builds include one for your release, they work quite well, but 
even if the binary RPMs aren't available for your release, it isn't that 
hard to build from the source RPM--I've been doing that for various 
reasons over the last couple of years--though you may have to lie to 
rpmbuild about what release you are building for (I tell rpmbuild I'm 
building for f10 when I'm actually running f11, for example); see the 
sketch below.
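
One way to do that is to override the dist tag when rebuilding the 
source RPM--a sketch with a made-up package file name; note that the 
exact macro your distro's spec file keys off may differ:

   rpmbuild --rebuild --define 'dist .fc10' bacula-3.0.2-1.src.rpm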


Hope this helps,

-se


Re: [Bacula-users] Configuring SpoolData?

2008-01-20 Thread Steve Ellis
Arno Lehmann wrote:
 Hi,
 
 19.01.2008 15:52, Dan Langille wrote:
 Jesper Krogh wrote:
 Hi.

 I'd like to configure this simple policy:

 Bacula should SpoolData = No if it is doing a Full backup (even if it 
 has been automatically upgraded from Incremental), otherwise it should 
 use SpoolData = Yes.

 Is that possible?
...
 I think Ryan's suggestion is the way to go - use different storage 
 devices, and set these up with and without spooling. Then, define 
 which pools to use for which level in the job resource - 'incremental 
 pool' etc. and in the pool definitions, assign the storage device to 
 use. That's what I do... but I don't want the data to end up on the 
 same devices, in the end... If you want to save to disk, this is not a 
 problem, you can share a directory among many storage devices. I'm not 
 sure I'd try this with tape, though...
 
 Arno
 

Actually, you don't need to set up different storage devices--as I 
recall, that can make restores complicated (I don't think bacula likes 
to mix storage devices on a restore).  You can simply override SpoolData 
in the Schedule, like this:

Schedule {
   Name = Home
   Run = Pool=Yearly jan 1st sun at 2:10
   Run = feb-dec 1st sun at 2:10
   Run = Pool=Weekly 3rd 5th sun at 2:10
   Run = Level=Differential Pool=Weekly SpoolData=Yes 2nd 4th sun at 2:10
   Run = Level=Differential Pool=Daily SpoolData=Yes wed at 2:10
   Run = Level=Incremental Pool=Daily SpoolData=Yes mon-tue thu-sat at 2:10
}

This is an example from my bacula-dir.conf--I do annual full backups to 
a pool that never gets recycled, monthly fulls to a monthly pool with a 
retention period of 15 months, 1 or 2 fulls per month to a weekly pool 
(retention period 3 months??), and so on.  In this case, the Job (or 
JobDefs) specifies not to spool data, and I override that with 
SpoolData=Yes for all non-Full runs in the schedule.  This has been 
working for me for about 2 years (bacula ~1.38.?? to 2.2.7).

Hope this helps,

-se



-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] backup tapes at home

2007-03-04 Thread Steve Ellis
Bill Moran wrote:
 Alan Brown [EMAIL PROTECTED] wrote:

 Our current main fire/data safe is a Phoenix Data Commander 4623, which is 
 capable of taking 720 LTO tapes in current configuration (39 per drawer, 
 cased, increasing to 45 uncased).

 See http://www.phoenixsafeusa.com/
 or http://www.phoenixsafeusa.com/us/viewproduct/4620_data_commander.html

 This cost a shade under US $10,000 with tax and delivery included.
 
 Interesting, but I suspect that's a little out of the price range for
 someone wanting to protect their data at home.  I know I wouldn't
 even know where to put such a thing at my house.
 
 It'd be cheaper to just rent a safe-deposit box.
 

It may not help very many people, but for anyone who already has a fire 
safe at home (not data grade), I recently discovered there are media 
cooler units you can put inside (Schwab makes the one I found).  I've 
been using my fireproof gun safe to store my backup media, hoping that 
with the media on the bottom it might survive the internal temperatures 
of the safe in a fire.  With a media storage unit inside my safe (~$300, 
capacity for ~10 LTO carts), though, I could store media properly.  
Apparently these can also fit into fireproof file cabinets that aren't 
data grade--something you might be able to pick up used.

-se



-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Problems with 1.38.6 under Fedora Core 5 ?

2006-04-05 Thread Steve Ellis

Dan Langille wrote:
 On 5 Apr 2006 at 0:32, Wolfgang Denk wrote:

 Hello,

 is anybody running Bacula 1.38.6 (28 March 2006) under Fedora Core 5?
 Since updating from FC4 to FC5 and from Bacula 1.38.5 to  1.38.6  (in
 one step, probably this was not a good idea), each run of our nightly
 backup jobs will fail like this:


I just switched to FC5 too, only I did not change my bacula version at the
same time.  The problem shows up on FC5 with 1.38.5 too.  And to answer
the question Kern raised about old mysql versions or old RPM--I installed
FC5 from scratch (i.e. I didn't upgrade, I reinstalled, wiping every
system partition), and I rebuilt the bacula RPMs from the source RPM after
I upgraded.

 However, restarting the bacula DIR *does* solve the problem, for  one
 day or so - I can run my backups manually, but the next scheduled run
 will fail with the same errors.

 For the gone away problem does the FAQ help?

http://paramount.ind.wpi.edu/wiki/doku.php?id=faq

 MySQL Server Has Gone Away


I'm guessing that Dan's suggestion is right on target.  The wait_timeout 
is 8 hours; I'm guessing that bacula (in my config) doesn't issue 
queries that often (I do backups between 3AM-7AM), and for some reason 
with MySQL 5 the connection gets dropped after the timeout.  I've 
changed my wait timeout to 36 hours in /etc/my.cnf to see if the problem 
goes away--see below.  Perhaps someone who understands MySQL better than 
I do can work out why the connection didn't use to time out (at least 
with MySQL 4).
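
For reference, the change amounts to this (36 hours expressed in 
seconds):

   # /etc/my.cnf
   [mysqld]
   wait_timeout = 129600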

-se





[Bacula-users] Maximum Network Buffer Size warning

2005-12-14 Thread Steve Ellis
This should probably work its way into the manual, but a warning for
anyone who tries to move to significantly larger Maximum Network Buffer
Size numbers:

At least in bacula-1.38, Maximum Network Buffer Size _must_ be less than
51, or restores will crash the storage daemon.  I was playing with
Maximum Network Buffer Size and Maximum Block Size recently, and hit this
problem.

I naively assumed it would be beneficial to have 2x Maximum Block Size 
bytes of data buffered, and was trying to see if increasing the Maximum 
Block Size helped my storage daemon throughput.  Some testing with btape 
suggested that there was a performance benefit with larger block sizes 
(although since that isn't 'real' data, it might not be correct).

Anyway, I happily configured my bacula installation for 262144 maximum
block size and 524288 maximum network buffer size (256K and 512K). 
Backups seemed to be fine, but when I tried to restore, the storage daemon
kept crashing--I tracked it down to an ASSERT in src/stored/record.c
(about line 494), which presumably exists to prevent a malformed block
from chewing up too much memory.

I've now changed my maximum network buffer size to less than 51, but 
now I'm uncertain what value for network buffer size leads to the best 
performance--it may be best if it is a precise multiple of (Maximum 
Block Size - block overhead).  At any rate, without a source change, 
restores won't work if it is more than 51--and it may be that any 
performance gain from a larger-than-default block size isn't worth the 
trouble.  At least now I have a work-around if I need to restore my most 
recent set of backups.  For reference, the directives involved are 
sketched below.
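
A sketch of where the two directives live, from memory--check the 
manual for your version; I believe Maximum Block Size belongs to the 
Device resource, while Maximum Network Buffer Size is set on the 
Storage resource in bacula-sd.conf (and the FileDaemon resource in 
bacula-fd.conf):

   # bacula-sd.conf -- the sizes are the ones from my experiment, not advice
   Storage {
     Name = my-sd                           # hypothetical
     Maximum Network Buffer Size = 262144
     # ...
   }
   Device {
     Name = LTO-Drive                       # hypothetical
     Maximum Block Size = 262144
     # ...
   }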

-se





Re: [Bacula-users] Different and undesirable behavior with 1.38 than with 1.36

2005-11-13 Thread Steve Ellis

Kern Sibbald said:
 Hello again,

 You didn't by any chance recently upgrade from a 2.4 kernel to a 2.6
 kernel, did you?  I am seeing all kinds of hangs and other funny
 behavior in the Storage daemon due to the change in the behavior of the
 open() call for tape drives from one kernel to another.

Thanks for looking at this so quickly Kern-

No--I am running a 2.6 kernel, but I have been running it for 18 months 
or so.  I'm running a vintage Fedora Core 2 release--too lazy (and 
afraid) to upgrade this system, which is critical to my home network.  
There has not been a new Core 2 kernel in quite some time; my last 
kernel upgrade was in March, and I'm positive I was running it by August 
(I know I rebooted about that time).

I'm a networking software engineer, so although I have a lot of 
capability to maintain, fix and debug things here at home, I don't have 
much spare time--consequently, I tend to keep using whatever still 
works.  I did want to switch to Bacula 1.38, LTO2 and Fedora Core 4, but 
have so far only done the first upgrade (bacula).  I saw messages on 
bacula-users about recent 2.6 changes, and was hoping any dust would 
have settled by the time I got there (presumably when I get around to 
FC4--or FC5, if I continue to put it off).

If it would help, I can turn on some sd logging, or something.  The poll 
interval suggestion will probably work for me for now, especially once I 
get the LTO2 drive online, making nearly all of my backups a one-tape 
affair.

Thanks!

-- 
-se

Steve Ellis




Re: [Bacula-users] Access denied errors in XP

2005-05-10 Thread Steve Ellis

Joshua Kugler said:
 I'm not sure if this is Bacula related or not, but I'm going to try...

 I just restored some files from a backup to a new Windows XP box, and now
 when
 I go to access those files, I am getting access denied errors.  Looking at
 the security information on those files, it would appear they are owned by
 users that no longer exist.  And when I try to add Administrator as a
 user, I
 am still getting Access Denied errors (as in, I can't change the security
 settings on these files).  So, it would appear even an administrator
 account
 isn't all powerful.  Any recommendations on how to bypass windows
 security
 and get these files deleted?


I believe there is a trick that will work--you should be able to change
ownership of the files to Administrator, once you've done that you can
change attributes as you desire.  Properties/Security Tab/Advanced
Button/Owner Tab should get you where you need to go.  Be sure to click on
the Replace owner on subcontainers... box if what you are changing is a
directory.

 Obviously Bacula could read these files (on the old system) and write them
 (on
 the new system), so I do I get the permission I need to delete them?


If the machine is not part of a domain, then you created a new and 
different user that just happens to have the same username.  Windows 
identifies users by a SID (security identifier)--the username is just 
one property of a user, while access rights are controlled on a SID 
basis.  The 'standard' accounts (among them Administrator), I believe, 
always wind up with the same identifier.  When logged in as 
Administrator, take ownership of the files; then you should be able to 
grant full control to another user (and have him claim ownership of the 
files, if necessary).

I'm not a Windows guru (and don't want to be), so I can't be held
accountable for bad advice, but something close to this worked for me.

Hope this helps,

-se

 --
 Joshua Kugler
 CDE System Administrator
 http://distance.uaf.edu/



