[Bacula-users] Restore problem - dir tells client wrong sd

2013-04-18 Thread Chris Adams
I'm trying to do a restore of a directory of files.  The restore gets a
few files from one volume (disk volumes), but then stops; the director
shows the job waiting on the client to connect to the storage daemon.

I traced the problem down to the director telling the client to connect
to the wrong storage daemon address.  My backup server connects to
several VLANs, and I have a storage definition for each (different IPs).
When the director tells the client about the first volume, it gives the
correct storage daemon address, but for the second volume, it sends the
client an address that the client can't reach.
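
For reference, the storage definitions look roughly like this (the
names, addresses, and password are made up, not my real config):

  Storage {
    Name = File-vlan10
    Address = 10.0.10.5        # SD address on the VLAN 10 side
    SD Port = 9103
    Password = "secret"        # placeholder
    Device = FileStorage
    Media Type = File
  }
  Storage {
    Name = File-vlan20
    Address = 10.0.20.5        # same SD, reached via a different VLAN
    SD Port = 9103
    Password = "secret"        # placeholder
    Device = FileStorage
    Media Type = File
  }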

If I change the restore job to send the restore to a directory on the
backup server (which of course can connect to all the storage daemon
IPs), it works.

This appears to be a bug in the director: it uses the specified storage
daemon for the first volume, but a different one for the second volume.

This was all with Bacula 5.2.12 on RHEL 6 (backup server) and RHEL 5
(client).  I did try upgrading the backup server to Bacula 5.2.13, but I
get the same behavior.

-- 
Chris Adams cmad...@hiwaay.net
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.



Re: [Bacula-users] Windows client backup: VSS and/or ntbackup?

2013-03-21 Thread Chris Adams
Once upon a time, Christoph Litauer lita...@uni-koblenz.de said:
 This works but has two caveats: ntbackup runs several minutes even on an
 unused system, and the resulting backup is several GB every day.

It sounds like you are not excluding things you should exclude.  An
incremental backup of an unused system should not be several GB/day.

Something like this should help (but remember that changing the
include/exclude will cause the next backup to be promoted to a full
backup):

  Options {
# Windows is case-preserving, NOT case-sensitive
Ignore Case = yes
Exclude = yes

# general Windows stuff
wilddir = c:/Windows/$NtServicePackUninstall$
wilddir = c:/Windows/$NtUninstall*
wilddir = c:/Windows/Installer/$PatchCache$
wilddir = c:/Windows/ServicePackFiles
wilddir = c:/Windows/SoftwareDistribution/Download
wilddir = c:/Windows/Temp

# per drive
wilddir = [a-z]:/$Recycle.Bin
wilddir = [a-z]:/RECYCLER
wilddir = [a-z]:/System Volume Information
    # pagefile.sys is a file, not a directory, so wildfile is needed here
    wildfile = [a-z]:/pagefile.sys
  }

The biggest things that will hit incremental backups of unused systems
are the last two in the list.  The System Volume Information directory
is where VSS information is stored, so it can be large during the
backup.  Also, any activity (such as running ntbackup) can cause changes
in pagefile.sys, which will cause the whole file to be included in the
backup (and it can also be big).

Now, I have a question of my own related to this: ntbackup is gone in
newer versions of Windows.  What are people using there to do the
equivalent of "ntbackup backup systemstate"?
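
The closest built-in replacement I'm aware of is wbadmin, along these
lines (the target drive letter is just an example), though I'd still
like to hear what people actually use with Bacula:

  rem Windows Server 2008 and later; writes a system state backup to E:
  wbadmin start systemstatebackup -backupTarget:E: -quiet
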
-- 
Chris Adams cmad...@hiwaay.net
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.



[Bacula-users] Let copy job use available tapes?

2012-12-10 Thread Chris Adams
I am trying to use a tape library for a copy job (for off-site backups).
I have 3 magazines of tapes, and I planned to just rotate between them
for 3 weeks of off-site backups.  The first week went fine, but I can't
get the second week to work.

The problem appears to be that Bacula wants to use specific tapes,
rather than just using the tapes in the currently loaded magazine.
For example, the first copy job used BVQ900-BVQ905 (bar code labels),
out of a magazine with 900-914.  I loaded the second magazine, with
915-929, but Bacula just keeps requesting BVQ906.  How can I get it to
see what is available and just use that?
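
For reference, the bconsole steps I know of for making Bacula rescan a
newly loaded magazine are these (the storage and pool names here are
placeholders):

  update slots storage=TapeLibrary   # rescan the changer's barcode slots
  list volumes pool=OffsitePool      # check which volumes Bacula considers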

-- 
Chris Adams cmad...@hiwaay.net
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.



Re: [Bacula-users] Slow disks RAID10 vs Fast disks RAID5

2012-11-22 Thread Chris Adams
Once upon a time, John Kenyon jken...@bgwgroup.com.au said:
 2 x 300GB 15K SAS (RAID 1) - This will have the OS (CentOS 64-bit) and
 the Bacula MySQL catalog

I have a new server running RHEL 6 with 15K SAS RAID1 for the OS.  It
has an external disk shelf with twelve 2TB 7200 RPM SATA drives
connected to a regular (non-RAID) SAS controller.  I'm using Linux
software RAID: two RAID5 sets with a RAID0 on top (Linux doesn't support
a direct RAID50, but you can get there with RAID-on-RAID).  I have the
Postgres catalog and the Bacula file backups on the array.
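
A sketch of the layering, with made-up device names:

  # two 6-disk RAID5 sets, striped together with RAID0 (in effect, RAID50)
  mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sd[b-g]
  mdadm --create /dev/md2 --level=5 --raid-devices=6 /dev/sd[h-m]
  mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2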

The servers I'm backing up are all connected to gigabit switches, using
a dedicated VLAN for backups (each server has at least two NICs).

When the full backup runs, the backup server hits 950 megabits per
second when all the jobs are running.  Most don't last long (a bunch of
servers with less than 5-6GB each), so then the network throughput
drops.

Even when the network is running at full speed, I'm not seeing anything
on the backup server that looks like an I/O bottleneck.

One note on the storage: with RAID50, I get an effective space of almost
20TB.  However, I can't make an ext4 filesystem that big (the disk
format and the kernel driver support it, but mke2fs didn't until very
recently).  RHEL puts XFS support in a layered product (extra cost), so
I'm not using that.  I just partitioned the software RAID device into
16TB and the rest, and am using the 16TB filesystem right now (the rest
is there for extra space as needed for random stuff).

-- 
Chris Adams cmad...@hiwaay.net
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.



Re: [Bacula-users] Input on backing up a SAN

2012-11-14 Thread Chris Adams
Once upon a time, David Palmer dwpal...@cazenovia.edu said:
 I am looking at ways to speed up our backups as our data continues to
 grow.  We currently have an EqualLogic group that stores much of our
 information.  One idea we had for speeding up backups is to have a
 file server that is separate from our production file servers and to
 mount the data volumes read-only for backups.  I am not planning to
 use this for anything but file servers (i.e. sql / mail).

 My question is: has anyone done this or something similar?  Are there
 any disadvantages to configuring it this way for our file servers?

I back up a volume from an EqualLogic SAN like this.  You do have to be
careful: you should limit visibility of the base volume to the active
server and of the snapshot to the backup server, or odd things can
happen (depending on OS, filesystem, etc.).

You also need a way to create a consistent filesystem snapshot.  For
example, my volume is used by a Linux server and contains an LVM volume.
My snapshot script connects to the server and creates an LVM snapshot
first, connects to the SAN to create a SAN snapshot, and then removes
the LVM snapshot on the server.  The SAN snapshot volume is then mounted
on the backup server.
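
An outline of the sequence (the hostnames and volume names are
placeholders, and the EqualLogic CLI details vary by firmware):

  # 1. quiesce the filesystem by snapshotting at the LVM layer
  ssh fileserver lvcreate -s -L 10G -n data-snap /dev/vg0/data
  # 2. snapshot the underlying volume on the SAN while the LVM snapshot exists
  ssh san-group 'volume select data snapshot create-now'
  # 3. drop the LVM snapshot on the live server
  ssh fileserver lvremove -f /dev/vg0/data-snap
  # 4. the backup server then connects the SAN snapshot and mounts the
  #    consistent LVM snapshot it contains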

Without LVM, Linux also has the fsfreeze command to quiesce a filesystem
(flush all pending writes and hold new ones).  You could freeze the FS,
create the SAN snapshot, and unfreeze.
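
Something like this, with the mount point as a placeholder:

  fsfreeze -f /data    # flush pending writes and block new ones
  # ...create the SAN snapshot here...
  fsfreeze -u /data    # thaw the filesystem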

-- 
Chris Adams cmad...@hiwaay.net
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.



Re: [Bacula-users] Input on backing up a SAN

2012-11-14 Thread Chris Adams
Once upon a time, Phil Stracchino ala...@metrocast.net said:
 You also mentioned SQL/mail.  You should not be backing up SQL DBs at
 the file level.  Doing so does not guarantee a consistent backup of the
 DB because there is time skew between the beginning and end of the DB
 backup (and can be time skew even within individual tables).  If all
 you're doing is a file-level backup of the DB data directories, you
 almost certainly only *think* you have backups of your DB.

If you don't want to go for DB-specific backup tools, you can get a
consistent backup of most databases with a filesystem or volume
snapshot, as long as you prepare the database for the snapshot.

For example (on the small-DB end), MySQL has a command to freeze writes
and flush in-memory buffers: "flush tables with read lock".  You can
have a script that connects to MySQL, runs the command, snapshots the
filesystem or volume (while that session stays open), and then runs
"unlock tables".  A backup of the snapshot should give you a consistent
MySQL backup.
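
A minimal sketch; the point is that the session holding the read lock
stays open across the snapshot.  The LVM names are made up, and this
assumes the mysql client's "\!" shell escape is honored in piped input:

  # one mysql session holds the read lock while the snapshot runs
  { echo 'FLUSH TABLES WITH READ LOCK;'
    echo '\! lvcreate -s -L 5G -n mysql-snap /dev/vg0/mysql'
    echo 'UNLOCK TABLES;'
  } | mysql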

Other databases have similar (sometimes more complicated) methods that
don't require a full dump.
-- 
Chris Adams cmad...@hiwaay.net
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.



[Bacula-users] Copy job use multiple drives (and scheduling)?

2012-11-09 Thread Chris Adams
I am setting up a disk-to-disk backup system with a weekly copy-to-tape
job (for offsite backups).  I have a couple of questions:

- I have a tape library with 2 drives.  Is there an easy way to get the
  copy job to use both?  I have set them up in an autochanger resource,
  but it looks like a single job will still only use one drive.  I'm
  selecting what to copy to tape with "Selection Type = Pool Uncopied
  Jobs" (config sketch below).

- With a single drive, it'll take a long time to copy from disk to tape
  (I'm estimating 20 hours), and that means that the copy job probably
  won't finish before the next night's incremental jobs are supposed to
  start.  If I understand the job scheduling correctly, Bacula won't
  start the new backup jobs until the (lower priority) copy job
  finishes.

  I don't normally want to run different priorities simultaneously (for
  example, I've got end-of-night jobs at a low priority), but I'd like
  the copy to continue while the nightly disk-to-disk jobs run (the disk
  storage should have plenty of I/O capacity).
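
For reference, the copy job looks roughly like this (the names are
placeholders, not my real config):

  Job {
    Name = "OffsiteCopy"
    Type = Copy
    Selection Type = PoolUncopiedJobs
    Pool = DiskPool          # source pool; its "Next Pool" names the tape pool
    Storage = TapeLibrary    # the autochanger resource with both drives
    Messages = Standard
    Priority = 20
    # (plus the Client and FileSet placeholders that Job resources require)
  }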

-- 
Chris Adams cmad...@hiwaay.net
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.



[Bacula-users] Splitting a job into two parallel jobs

2012-10-31 Thread Chris Adams
I'm trying to back up a relatively large filesystem (over 500G in use)
whose backups run somewhat slowly.  With my previous backup software, I
divided it into multiple parallel backup jobs, and that worked.  I'm
trying to implement something similar with Bacula 5.2 on RHEL 6 (backing
up to a separate disk array).

The particular filesystem is ext3 on LVM on a SAN, and I have a script
that:

- logs into the server
- snapshots the LVM
- logs into the SAN
- snapshots the volume
- deletes the LVM snapshot on the server
- mounts the LVM-snapshot-on-SAN-snapshot on the backup server

There's also a post-backup script to unmount the snapshot and delete it
from the SAN.

I could split the job into two filesets, with includes/excludes set up
to divide the tree (for example, back up directories a-m in one job and
n-z in the other).  However, how can I get the snapshot scripts to run
just once, before both jobs start and after both jobs finish?
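
The kind of fileset split I mean (the mount point is a placeholder; the
companion FileSet would exclude [a-m]* instead):

  FileSet {
    Name = "BigFS-AtoM"
    Include {
      Options {
        Ignore Case = yes
        wilddir = "/mnt/snap/[n-z]*"
        Exclude = yes
      }
      File = /mnt/snap
    }
  }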

I was looking at having a parent job that uses RunScripts to run the
snapshot script, run the two jobs, wait, and then run the snap-del
script, but it looks like "run" isn't an allowed console command there.

I have a similar issue with a Windows server, where I've split a backup
into two jobs to run in parallel.  The problem there is that both show
as running, but only one actually runs at a time (the other just seems
to sit there waiting, per "status client=foo" and "status storage=bar"
in bconsole).  Is this due to VSS?  Is something locking and only
allowing one access at a time?  I have raised Maximum Concurrent Jobs,
so that isn't the problem.

-- 
Chris Adams cmad...@hiwaay.net
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.



Re: [Bacula-users] Splitting a job into two parallel jobs

2012-10-31 Thread Chris Adams
Once upon a time, lst_ho...@kwsoft.de lst_ho...@kwsoft.de said:
 What about using more than one snapshot?  If done right, snapshots are
 cheap, so you should have no problem taking a snapshot/mount per job
 to get two jobs running.  But I doubt you will gain much with parallel
 backups on one filesystem/IO channel anyway.

I'll have to check the space (not sure it is available right now).

I do know that my old backup software did run the same backup (with
parallel trees) somewhat faster (and that was to tape rather than disk).

 Currently with VSS you have a limit of one job running per Windows
 machine, as we already found out :-(

I thought that might be the issue.  I may be able to work around this by
not using VSS for these particular trees, at least for testing.

 But as said, if you don't have independent IO channels or you are CPU
 bound (data encryption), you won't gain very much by running more than
 one job concurrently.

I used multiple parallel trees with my previous backup software, in part
because it seemed to have high overhead with lots of files (one of these
servers has a Maildir store with a couple of million files).  Those
backups finished faster (as part of my overall network backup to tape)
than my current test backups (a single job to disk, with an upgraded
backup server and upgraded switches between the servers).  That's what
got me looking at running parallel jobs with Bacula (at least to test
whether it helps the speed).

Thanks.
-- 
Chris Adams cmad...@hiwaay.net
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
