[Bacula-users] Writing to files being backed up.

2020-02-11 Thread Paul G Morris
During a backup session, if someone is writing to a directory that is currently 
being backed up, how does Bacula handle the changes being made?

I have a differential backup session running on a system where the user is 
constantly updating files. The directory contains ~3TB of data and does not 
necessarily need to be backed up. I've hit my limit on the number of volumes 
available for that pool and have to keep creating new labels manually. 
If I change the FileSet to exclude the directory and do a reload in the console, 
will that take effect immediately and stop the backup? Or should I temporarily 
redefine the pool limits until the backup completes (if it ever will)?

I’m running Bacula version 5.2.13 and I don’t see an easy way to stop the job 
except to restart the director.
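An editorial sketch (not from the thread) of how this is normally handled in bconsole: a running job can be cancelled without restarting the Director, and `reload` picks up an edited FileSet for subsequent runs (it does not alter a job already in progress). The JobId below is a placeholder:

```
*status dir            # find the JobId of the running backup
*cancel jobid=123      # stop that job without restarting the Director
*reload                # re-read bacula-dir.conf so the edited FileSet applies to future runs
```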

Any suggestions would be helpful.

Paul


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] waiting to reserve device messages after upgrade to 9.4.2 from 7.4.4

2019-09-04 Thread Paul Allen
= 5368709120 #5G
}

#
# Send all messages to the Director,
# mount messages also are sent to the email address
#
Messages {
  Name = Standard
  director = (REDACTED) = all
}

-

Example Client config:

Client {
  Name = (REDACTED)-files-fd
  Address = (REDACTED)
  FDPort = 9102
  Catalog = MyCatalog
  Password = "(REDACTED)"  # password for FileDaemon
  Maximum Concurrent Jobs = 3
  # TLS Config / use Client certificate for outbound connections
  TLS Enable = yes
  TLS Require = yes
  TLS CA Certificate File = /etc/bacula/ssl/(REDACTED).pem
  TLS Certificate = /etc/bacula/ssl/(REDACTED).crt
  TLS Key = /etc/bacula/ssl/(REDACTED).key
}

Job {
  Name = "(REDACTED)-files-backups"
  JobDefs = "DefaultJob"
  Client = (REDACTED)-files-fd
  Schedule = "MonthlyCycle-1"
  FileSet = "(REDACTED)-files-fileset"
  Pool = "Weekly"
}

FileSet {
  Name = "(REDACTED)-files-fileset"
  Include {
    Options {
      signature = MD5
      xattrsupport = yes
    }
#    File = /shared/
    File = /storage/backups/
  }
  Exclude {
    File = /.snap
  }
}

-- 
Paul Allen

Inetz Sr. Systems Administrator





Re: [Bacula-users] Restoring from old backups

2018-01-24 Thread Paul Fontenot
That's the program complaining about the configuration.

On Jan 21, 2018 09:07, "Heitor Faria" <hei...@bacula.com.br> wrote:

> Hello, Paul,
>
> Well, let me be more specific. I have backups on a removable USB drive
> that I need to look at before I scrub the drive. The problem I am having is
> the original server that wrote these backups is long gone. What is the
> easiest way to get to this data? I’ve tried everything I can think of and
> it keeps failing with an error concerning the configuration but I’m not
> sure I know what configuration it’s looking for – I’m at the “can’t see the
> forest for the trees” point.
>
> Maybe you are looking for the bls program.
>
> Regards,
> --
> 
> ===
> Heitor Medrado de Faria | CEO Bacula do Brasil & USA | Visto EB-1 |
> LPIC-III | EMC 05-001 | ITIL-F
> • Don't be billed by the size of your backups; learn about Bacula
> Enterprise http://www.bacula.com.br/enterprise/
> • I provide in-company training and deployment of Bacula Community
> http://www.bacula.com.br/in-company/
> Brazil +55 (61) 98268-4220 <+55%2061%2098268-4220> |  USA  +1 (323)
> 300-5387 <(323)%20300-5387> | www.bacula.com.br
> 
> 
> We also recommend these complementary trainings:
> Basic Shell and Shell Programming <http://www.livrate.com.br/> with
> Julio Neves | Zabbix <http://www.spinola.net.br/blog> with Adail Host.
> 
> 
>
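For later readers, a sketch of what using `bls` (and its companion `bextract`) against such an orphaned volume can look like. All names here are placeholders, and a minimal stand-in bacula-sd.conf describing the USB disk as a File device is assumed to exist; the original server's configuration is not needed, only a device definition matching the media:

```shell
# Placeholders throughout: Volume0001, FileStorage, and the paths.
bls -c bacula-sd.conf -j -V Volume0001 FileStorage    # list the jobs stored on the volume
bls -c bacula-sd.conf -V Volume0001 FileStorage       # list the files in those jobs
# Extract everything from the volume into /tmp/restore, no catalog required:
bextract -c bacula-sd.conf -V Volume0001 FileStorage /tmp/restore
```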


[Bacula-users] Restoring from old backups

2018-01-21 Thread Paul Fontenot
Well, let me be more specific. I have backups on a removable USB drive that
I need to look at before I scrub the drive. The problem I am having is the
original server that wrote these backups is long gone. What is the easiest
way to get to this data? I've tried everything I can think of and it keeps
failing with an error concerning the configuration but I'm not sure I know
what configuration it's looking for - I'm at the "can't see the forest for
the trees" point.



[Bacula-users] Restoring from old backups

2018-01-07 Thread Paul Fontenot


smime.p7m
Description: S/MIME encrypted message


Re: [Bacula-users] opportunistic backup?

2016-11-17 Thread Paul J R
On 17/11/16 07:26, Phil Stracchino wrote:
> On 11/16/16 09:12, Paul J R wrote:
>> Hi All,
>>
>> I have a data set that i'd like to backup thats large and not very
>> important. Backing it up is a "nice to have" not a must have. I've been
>> trying to find a way to back it up to disk that isnt disruptive to the
>> normal flow of backups, but everytime i end up in a place where bacula
>> wants to do a full backup of it (which takes too long and ends up
>> getting cancelled).
>>
>> Currently im using 7.0.5 and something i noticed in the 7.4 tree is the
>> ability to resume stopped jobs but from my brief testing of it I wont
>> quite do what Im after either. Ideally what im trying to achieve is give
>> bacula 1 hour a night to backup as much as it can and then stop. Setting
>> a time limit doesnt work cause the backup just gets cancelled and it
>> forgets everything its backed up already and tries to start from scratch
>> again the following night.
>>
>> VirtualFull doesnt really do what im after either and i've also tried
>> populating the database directly in a way that makes it think its
>> already got a full backup (varying results, and none of them fantastic).
>> A full backup of the dataset in one hit isnt realistically achievable.
>>
>> Before I give up though, im curious if anyone has tried doing similar
>> and what results/ideas they had that might work?
>
>
> This is a pretty difficult problem.  To restate the problem, it sounds
> like you are trying to create a consistent full backup, in piecewise
> slices an hour or two at a time, of a large dataset that is changing
> while you're trying to back it up - but without ever actually performing
> a full backup.  The problem with this is that you need to be able to
> keep state of a stopped backup for arbitrary periods, and at the same
> time keep track of whether there have been changes to what you have
> already backed up, and you don't have a full backup to refer back to.
>
>
> The only thing I can think of is, is the dataset structured such that
> you could split it [logically] into multiple chunks and back them up as
> separate individual jobs?
>
>
>

I don't need a consistent full; any backup it manages to do is a plus. 
I've tried splitting it into multiple job sets, but the way the dataset 
changes makes it fairly resistant to that, because the data changes its 
name as it gets older.

Recovering the data when it gets broken isn't difficult (the last time 
was via a Windows machine connected to it with Samba that got 
cryptolockered); it just gets synced back across the internet (which is 
a little painful on the internet link for a couple of days). Any backup 
data I have is simply a plus that means that particular chunk of data 
doesn't have to be re-pulled. Unfortunately, unless it completes a backup 
it doesn't even record what it managed to back up.

The following backup had run for four hours and chewed through quite a 
decent chunk of data, but it never recorded the files it backed up 
(though this was only a test):

+-------+-----------+---------------------+------+-------+----------+----------+-----------+
| JobId | Name      | StartTime           | Type | Level | JobFiles | JobBytes | JobStatus |
+-------+-----------+---------------------+------+-------+----------+----------+-----------+
|    38 | NAS02-Job | 2016-11-15 17:33:24 | B    | F     |        0 |        0 | A         |
+-------+-----------+---------------------+------+-------+----------+----------+-----------+

Part of the reason I was hoping to do it with Bacula is that it'll keep 
some of the history that gets lost when the dataset is reconstructed from 
bare metal (which really isn't important either, just handy to have).

One thing I did try was moving the data to be backed up out of the way, 
getting Bacula to run (which completed the full backup), then moving the 
data back into place, which meant Bacula did the next one as an 
incremental. But that didn't have any success either, as it didn't 
complete the incremental and didn't record what it backed up.
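For later readers: the 7.4-tree stop/resume feature mentioned in this thread is driven from bconsole roughly as below. Unlike `cancel`, `stop` is meant to record the job as incomplete so it can continue where it left off; the JobId is a placeholder and the exact syntax may vary by version, so check the console command reference for your release:

```
*stop jobid=38       # mark the running job incomplete, keeping its state
*resume jobid=38     # later: continue from where the job stopped
```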







[Bacula-users] opportunistic backup?

2016-11-16 Thread Paul J R
Hi All,

I have a data set that I'd like to back up that's large and not very 
important. Backing it up is a "nice to have", not a must-have. I've been 
trying to find a way to back it up to disk that isn't disruptive to the 
normal flow of backups, but every time I end up in a place where Bacula 
wants to do a full backup of it (which takes too long and ends up 
getting cancelled).

Currently I'm using 7.0.5, and something I noticed in the 7.4 tree is the 
ability to resume stopped jobs, but from my brief testing it won't 
quite do what I'm after either. Ideally, what I'm trying to achieve is to 
give Bacula one hour a night to back up as much as it can and then stop. 
Setting a time limit doesn't work because the backup just gets cancelled, 
and it forgets everything it has backed up already and tries to start 
from scratch again the following night.

VirtualFull doesn't really do what I'm after either, and I've also tried 
populating the database directly in a way that makes it think it's 
already got a full backup (varying results, and none of them fantastic). 
A full backup of the dataset in one hit isn't realistically achievable.

Before I give up, though, I'm curious whether anyone has tried doing 
something similar and what results/ideas they had that might work?

Thanks in advance!







Re: [Bacula-users] Typical tape write performance

2015-12-02 Thread Paul Elliott
On Wed, 02, Dec, 2015 at 03:04:07PM +, Alan Brown spoke thus..
> On 02/12/15 12:14, Paul Elliott wrote:
> >>  Maximum block size = 2M
> >Have you experienced any issues with that block size?
> Only when the drives/tapes are dirty and then you get excess errors on all
> block sizes anyway.
> Because entire blocks are rewritten if there is an error, "tape waste" goes
> up, but even with small blocks I've seen 75% of a tape lost to rewrites.
> You do need to watch those stats and react quickly if they're climbing -
> where react quickly == put a cleaning tape in the drive asap.

How are you currently monitoring the error counts? I've not found a
useful solution on Linux but maybe I'm missing something!

> Unfortunately Bacula does not lend itself well to such interventions.

Agreed. We had to turn off auto cleaning on our tape library as it would
conflict with Bacula's operations, which is a shame. If we could find a
way to query the drive to see if it requires cleaning, it should
be possible for us to load a cleaning tape using a pre-job script.
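On the "query the drive" idea: one hedged possibility (an assumption on my part, not something the posters describe) is checking the drive's TapeAlert flags, e.g. via `tapeinfo` from the mtx package, where flags 20 ("Clean now") and 21 ("Clean periodic") indicate a cleaning request. A sketch, demonstrated against a canned sample line so it runs without a tape drive:

```shell
# Hedged sketch: detect a cleaning request from tapeinfo-style output.
# Device name and exact output format are assumptions; check your drive.
needs_cleaning() {
  # succeeds when a cleaning TapeAlert flag (20 or 21) appears on stdin
  grep -Eq 'TapeAlert\[2[01]\]'
}

# In a real pre-job script you would run something like:
#   tapeinfo -f /dev/sg3 | needs_cleaning && mtx -f /dev/sg3 load <cleaning-slot>
# Demo against a canned sample line:
sample='TapeAlert[20]: Clean now: The tape drive needs cleaning NOW.'
if printf '%s\n' "$sample" | needs_cleaning; then
  echo "drive requests cleaning"
fi
```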
> >What's the advantage of the maximum job spool size setting?

> These are tunables.
 
> I've also got a run-before script which tests spool space and holds the job
> if it's over 70% (sleep on failure, then loop and test again). We get better
> overall throughput this way than by letting jobs fight over the spool.

That's an interesting way of dealing with concurrency issues. Neat.
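The run-before spool guard described above is not shown in the thread; here is a hedged sketch of how such a script might look. SPOOL_DIR, the 70% threshold, and the df-based check are all assumptions, and a real script would loop with a sleep rather than decide once:

```shell
# Hedged sketch: hold a job while the spool partition is too full.
SPOOL_DIR=${SPOOL_DIR:-/tmp}   # assumption: real setups point at the spool mount
THRESHOLD=70                   # percentage quoted in the post above

spool_pct() {
  # POSIX df: line 2, column 5 is "Use%"; strip the trailing '%'
  df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

# A real RunBefore script would loop: while over threshold, sleep 60 and
# re-test, so the job only starts once running jobs have despooled.
if [ "$(spool_pct "$SPOOL_DIR")" -gt "$THRESHOLD" ]; then
  echo "spool at $(spool_pct "$SPOOL_DIR")% - would sleep and re-test"
else
  echo "spool ok - job may start"
fi
```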

Thanks, Paul.

-- 
Paul Elliott, UNIX Systems Administrator
York Neuroimaging Centre (YNiC), University of York




Re: [Bacula-users] Typical tape write performance

2015-12-02 Thread Paul Elliott
Hi Alan,

On Wed, 02, Dec, 2015 at 11:46:40AM +, Alan Brown spoke thus..
>You should also consider doing the following in bacula-sd tape drive
>stanzas
>  Maximum File Size = 16G
>  Maximum Network Buffer Size = 262144
>  Maximum block size = 2M

Have you experienced any issues with that block size? We're using LTO5
and will be moving to LTO6 in the next few weeks but we're still using a
256k block size based on this (possibly outdated) advice from Kern:

https://adsm.org/lists/html/Bacula-users/2013-09/msg00104.html

I would be interested to hear what block sizes other LTO5/6 users are
using.

>These are local-specific (my spool has up to 20 jobs feeding into 280GB of
>available space) and if you have more space/fewer jobs I'd consider
>bumping the Job spool size anything up to 100GB
>  Maximum Spool Size = 120G
>  Maximum Job Spool Size = 30G

What's the advantage of the Maximum Job Spool Size setting? We're not
using any spooling at the moment, so I'm interested in what parameters
would be best for our setup (if any).

Thanks, Paul.

-- 
Paul Elliott, UNIX Systems Administrator
York Neuroimaging Centre (YNiC), University of York




Re: [Bacula-users] Bacula for OSX 10.9

2014-09-07 Thread Paul Mather
On Sep 7, 2014, at 5:42 AM, Kern Sibbald k...@sibbald.com wrote:

 On 09/07/2014 07:33 AM, Eric Dannewitz wrote:
 I'm interested in perhaps deploying this in my K-8 school, but I have not 
 found a good tutorial on how to install it, or whether it even works right on Mac.
 
 Anyone have some insights on this? My idea would be to back up about 30 Macs to 
 an Ubuntu server.
 
 This would be a good way to set up Bacula.  The Director, SD and catalog
 work well on an Ubuntu server -- I recommend Trusty (14.04).  For the
 Macs, someone has probably made the binaries and distributes them on the
 Internet.  Otherwise, if you load all the appropriate build tools on the
 Mac, you can easily build the FD.   Later this year, Bacula Systems will
 provide free binaries for MacOSX, which should also help.

I've not used Bacula on a Mac, but I do notice that Homebrew 
(http://brew.sh) has a formula for bacula-fd, which could be used to 
install the client.  Right now, it's only for the 5.x version (5.2.13), 
though.

I hope this helps.

Cheers,

Paul.




Re: [Bacula-users] Copy job limit is 100?

2014-01-10 Thread Paul De Audney
On 9 January 2014 12:54, Steven Hammond shamm...@technicalchemical.com wrote:

 I missed a couple of days (holidays) backing up from disk to tape (we
 backup disk to disk every night) so when I went to run the job to copy disk
 to tape it only grabbed 100 jobs.  This seems sort of artificial (what if I
 had more than 100 workstations/servers I was backing up?).  I was wondering
 if there is something to set that will override that setting (I couldn't
 find one at a cursory glance).  I would prefer not to use a special query
 if possible (we are using PoolUncopiedJobs).  I know how to write a query
 and have one already that I use to see how many jobs need to be backed up.
  I'm just curious how others are getting around this artificial limit.
  Thanks.


I am currently using a custom SQL query to copy jobs. I do this because we
have different backup schedules for various systems. I find the SQL query
does give me more control over what jobs get copied and when.

Unfortunately, when I originally configured Bacula I set up the
incremental and full backups for all hosts to be written into a single
pool for disk-based backups. So if I use PoolUncopiedJobs I will end up
copying all my incremental backups to tape.
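For readers wanting the shape of such a setup: a hedged sketch of a copy job driven by a custom query via `Selection Type = SQLQuery`. All resource names and the query itself are illustrative only, and the catalog schema varies between Bacula versions, so treat this as a starting point rather than a working configuration:

```
Job {
  Name = "copy-fulls-to-tape"
  Type = Copy
  Level = Full
  Pool = "DiskPool"          # source pool; its "Next Pool" or this job decides the tape target
  Selection Type = SQLQuery
  # Illustrative: select only successful, uncopied Full jobs from DiskPool.
  Selection Pattern = "SELECT DISTINCT Job.JobId FROM Job, Pool
      WHERE Pool.Name = 'DiskPool' AND Pool.PoolId = Job.PoolId
        AND Job.Type = 'B' AND Job.Level = 'F' AND Job.JobStatus = 'T'
      ORDER BY Job.StartTime"
}
```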


[Bacula-users] Client side scripts

2013-12-12 Thread Paul Fontenot
Can Bacula run a client side script? For instance, I have bacula server A
backing up bacula client B, can I specify a script to be run before the
job on client B? I've tried this by specifying the script to run in the
job block and received an error indicating the script didn't exist.




Re: [Bacula-users] Client side scripts

2013-12-12 Thread Paul Fontenot
Thank you very much, I was calling the script incorrectly

-Original Message-
From: Juraj Sakala [mailto:juraj.sak...@gmail.com] 
Sent: Thursday, December 12, 2013 7:50 AM
To: Paul Fontenot; Bacula Users
Subject: Re: [Bacula-users] Client side scripts

Of course it can. But the script has to be on the client. In the Job
resource you specify the script path, which is on the client side.

Example from job resource

  RunScript {
RunsWhen = Before
FailJobOnError = Yes
RunsOnClient = Yes
Command = /usr/lib/bacula/script.sh
  }
The path /usr/lib/bacula/ is on the client, not on the server.

On 12/12/2013 03:36 PM, Paul Fontenot wrote:
 Can Bacula run a client side script? For instance, I have bacula server
A
 backing up bacula client B, can I specify a script to be run before 
 the job on client B? I've tried this by specifying the script to run 
 in the job block and received an error indicating the script didn't exist.
 
 
 




Re: [Bacula-users] how to test tape library with bacula

2013-08-12 Thread Paul De Audney
Hi Deepak,

Have you followed the document / steps outlined in
http://www.bacula.org/5.0.x-manuals/en/problems/problems/Testing_Your_Tape_Drive.html?
Once you can tell us what you have tested and what is not working, we might
be able to provide more detailed help.

You need to make sure you can control your tape drive using the mt command
and control the library/autochanger using the mtx command.
For example I make sure that I can list all the tape slots on my autoloader
(IBM TS3310) using the following mtx command (mtx -f /dev/sgautoloader
status)
My device is using custom udev rules to ensure that the device name is the
same. So your device name will be different.
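A sketch of the kind of manual smoke tests meant here. All device names are assumptions (find yours with `lsscsi -g`, `ls /dev/tape/by-id/`, or similar), and the `btape test` step comes from the linked manual chapter:

```shell
# Assumed device names; adjust /dev/nst0 and /dev/sg3 to your hardware.
mt -f /dev/nst0 status              # is the drive online and reporting sanely?
mt -f /dev/nst0 rewind              # basic motion control works?
mtx -f /dev/sg3 status              # changer: slots, drives and barcodes visible?
mtx -f /dev/sg3 load 1 0            # move the tape in slot 1 into drive 0
btape -c /etc/bacula/bacula-sd.conf /dev/nst0   # then run 'test' at the btape prompt
```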


On 12 August 2013 11:59, deepak@wipro.com wrote:

  Hi Guys,

 I want to configure Bacula to take backups from RedHat Linux servers to an
 IBM TS3500 tape library.

 Please tell me how I can test whether the server is communicating properly
 with the tape library and will be able to write backups to tape once Bacula
 is configured.

 Please send some commands to test a backup manually.

 Thanks!






-- 
Regards,

Paul De Audney
Regional System Administration Team Lead (USA/Europe)

+1 415-321-4461 Office
+1 415-426-9719 Mobile


Re: [Bacula-users] One USB disk per day

2013-06-25 Thread Paul Johnson
On 06/14/2013 02:16 AM, Marcin Haba wrote:
 On 13.06.2013 at 22:49, John Drescher wrote:
 On Thu, Jun 13, 2013 at 4:34 PM, arnaldojsousa
 bacula-fo...@backupcentral.com wrote:
 Hi all...

 The bacula with vchanger using external USB disks works fine, but i have 
 some doubts:

 - Jobs were executed last night with no problem, so I'd like to use one disk 
 per day. We have a set of 5 (Mon-Fri) 500GB external USB disks.

 Question 1:

 I'll have to run the command label barcodes for each magazine manually 
 every day?

 No. I would do that once for each external disk, making each external its
 own magazine.


I have two USB disks that I swap over every month or so. You could no 
doubt extend this approach to 5 disks swapped every day.

My bacula-dir.conf and bacula-sd.conf files are generated from template 
files using the following script, which I run when I swap the disks. If 
I was clever I could probably hook it into systemd to run when the right 
USB devices are detected, but it's more trouble than it's worth.

---8<---8<---
#! /bin/sh

TARGET=/etc/bacula
TEMPLATES=/etc/bacula/templates

if test -h /dev/disk/by-label/BACKUP5; then DISK=5; fi
if test -h /dev/disk/by-label/BACKUP6; then DISK=6; fi
if test -h /dev/disk/by-label/BACKUP7; then DISK=7; fi
if test -h /dev/disk/by-label/BACKUP8; then DISK=8; fi

m4 -D NUMBER=$DISK $TEMPLATES/bacula-dir.conf.m4 > $TARGET/bacula-dir.conf
m4 -D NUMBER=$DISK $TEMPLATES/bacula-sd.conf.m4 > $TARGET/bacula-sd.conf

service bacula-dir restart
service bacula-sd restart
---8<---8<---

Each USB drive has its own pair of pools; a big one for weekly backups 
and a smaller one for daily diffs, defined thus:

---8<---8<---
Pool {
  Name = Daily-NUMBER
  Pool Type = Backup
  Recycle = yes                 # Bacula can automatically recycle Volumes
  AutoPrune = yes               # Prune expired volumes
  Volume Retention = 5 days
  Storage = `File-Backup'NUMBER
  Label Format = `Daily'NUMBER-
  Maximum Volume Bytes = 20G
  Maximum Volumes = 20
}
---8<---8<---

All the jobs have names of the form happy-NUMBER, and so on. The 
current disks themselves are labelled BACKUP-7 and BACKUP-8, and the 
storage device in bacula-sd.conf.m4 looks like this:

---8<---8<---
Device {
  Name = `BACKUP'NUMBER
  Media Type = File
  Archive Device = `/media/BACKUP'NUMBER
  LabelMedia = yes;             # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;         # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}
---8<---8<---

Obviously there is a lot of other stuff, but I hope this is enough to 
give you the idea.

Paul.



Re: [Bacula-users] Recommended hardware

2013-05-03 Thread Paul Mather
On May 3, 2013, at 12:03 PM, lst_ho...@kwsoft.de wrote:

 
 Zitat von Francisco Garcia Perez fga...@gmail.com:
 
 Hello,
 I have a PowerVault 124T, but I want buy a new backup system with support
 for my old LTO-2 and LTO-3 tapes with at least 24 slots, 2 drives, a
 barcode scanner. What do you recommend me?
 
 
 If you really need support for reading your old LTO-2 tapes, you are  
 limited to LTO-4, because read compatibility is maintained two  
 generations down as far as I know. Instead, you should rather use at  
 least LTO-5 today and copy over the data you still need from the old LTO  
 tapes. As for brands recommended for tape libraries: most of the  
 mid-sized libraries (HP, IBM, Dell) are rebranded BDTs, which work well  
 with Bacula. For the tape drives, some say that full-height are more  
 solid than half-height, but I don't have first-hand experience of  
 this.


Regarding the full-height vs. half-height issue, IMHO, it does benefit you to 
read the small print.

In my case, we bought a Quantum SuperLoader3 LTO-4HH SAS, 16 Slots/2 Magazine 
2U rack mount unit. I assumed at the time the HH (half-height) only affected 
the physical form factor and nothing else.  Turns out I was wrong.  Puzzled by 
the slower speeds I got with btape (65 to 78 MB/s), I took a hard look at the 
data sheet for the SuperLoader3 family[1] and the explanation leapt out at me.  
The quoted performance differs between the LTO-4HH and LTO-4 drives: 288 
GB/hour vs 432 GB/hour (native speeds).  The former equates to about 82 MB/s 
whereas the latter parallels the LTO-4 spec max native speed of 120 MB/s.

So, there was a definite difference between going half-height and full-height 
there.  Unfortunately, for their LTO-4 SAS offerings, only (slower) half-height 
was available.  Fortunately, for LTO-5 and above, there was no documented speed 
penalty for going half-height vs. full-height, so maybe Quantum have got their 
engineering sorted out re: what was affecting their LTO-4 drives?

In all other respects, though, I have been very happy with the Quantum 
SuperLoader3 LTO-4HH SAS unit.  (I realise this doesn't meet the OP's 
specifications, though.)

Cheers,

Paul.

[1] https://iq.quantum.com/exLink.asp?8357556OK69N63I33059211


[Bacula-users] Restoring to /tmp modifies permissions of /tmp directory

2013-04-30 Thread Paul De Audney
Hi List,

I'd like to confirm whether it is expected behavior that, when restoring
into /tmp, the bacula-fd program modifies the permissions of that
directory from 1777 to 555.

Bacula version in use is 5.2.5-0ubuntu6.2 on Ubuntu 12.04 LTS.

Thanks!

--
Paul De Audney


Re: [Bacula-users] Backup strategy with Bacula

2013-03-27 Thread Paul Mather
On Mar 26, 2013, at 11:44 PM, Bill Arlofski waa-bac...@revpol.com wrote:

 On 03/26/13 22:33, Wood Peter wrote:
 Hi,
 
 I'm planning to use Bacula to do file level backup of about 15 Linux
 systems. Total backup size is about 2TB.
 
 For Bacula server I'm thinking to buy Dell PE R520 with 24TB internal
 storage (8x 3TB disks) and use virtual tapes.
 
 Hi Peter.   First, you're going to want some RAID level on that server and
 RAID0 is not (IMHO) RAID at all.  :)
 
 At a minimum you're going to want to set up (for example) RAID5 in the server
 which will give you a maximum of 7 x 3 = 21 TB since the equivalent of 1 full
 drive is used for parity, spread across all of the drives in a RAID5 array.
 
 Having said that, RAID5 does not have the best write speeds, but other RAID
 levels that will give more redundancy and better write speeds will use
 significantly more drives and give you less total storage. You may need to
 spend some time to consider your redundancy and read/write throughput
 requirements.
 
 Also, you will generally want to configure at least one drive as a hot spare
 so the RAID controller can automatically and immediately fail over to it in
 the case of a drive failure in the array.
 
 So that takes away at least 3 more TB, so now you're down to 18TB total storage
 with a minimally configured RAID5 array with 1 hot spare.
 
 Just some things to consider. :)


Another thing to consider is that with large capacity drives (3 TB) combined 
into large RAID-5 arrays there is an increased likelihood of a catastrophic 
array failure during an array rebuild due to an initial drive failure.  For 
this reason, RAID-6 would be preferred for such large arrays.

Reliability of large RAID arrays is one of the motivations behind raidz3 
(triple-parity redundancy) in ZFS.  See, e.g., 
http://queue.acm.org/detail.cfm?id=1670144 for details.

Cheers,

Paul.




[Bacula-users] Requested feature: use UDT for bulk data transfer

2013-03-18 Thread Paul Johnson
Like others, I have been having problems with random backup failures due 
to network IO errors.  See this thread for another example:

http://bacula.10910.n7.nabble.com/Win32-FD-Write-error-sending-N-bytes-to-Storage-daemon-td35109.html

I understand that this is not a bug in Bacula: its going to be a problem 
somewhere in the stack on either the client or the storage machine. As 
someone notes in the thread above, because Bacula transports hundreds of 
gigabytes over a single TCP channel, something that happens only once 
every hundred GB has a high probability of breaking a backup while being 
completely unnoticeable anywhere else. Maybe its a dodgy network card, 
maybe something in the TCP/IP stack on Windows XP. But that doesn't help 
solve the problem; I can't afford to swap components until I perturb 
this out of existence (I've tried the usual remedies). Therefore I'd 
like to suggest that Bacula support a transport mechanism other than TCP.

UDT (http://udt.sourceforge.net/) looks ideal. It's got a mature 
implementation for Linux, Windows, OS X and BSD. The API is close to a 
drop-in replacement for TCP, so not a lot of new code, and it could be 
made an option with TCP remaining the default. UDT is designed for 
high-volume data transfer, and is probably more efficient than TCP over 
a LAN, which is the typical use-case for Bacula.

A Google search for Bacula UDT turned up nothing, which is surprising. 
Am I really the first person to suggest this?

Thanks to the devs for a great program.

Paul.



Re: [Bacula-users] Job canceling tip

2013-03-05 Thread Paul Mather
On Mar 5, 2013, at 9:54 AM, Dan Langille d...@langille.org wrote:

 On 2013-03-04 04:42, Konstantin Khomoutov wrote:
 On Mon, 4 Mar 2013 08:45:05 +0100
 Geert Stappers geert.stapp...@vanadgroup.com wrote:
 
 [...]
 Thank you for the tip. I want to share another.
 It is about canceling multiple jobs. Execute from shell
 
   for i in {17..21} ; do echo cancel yes jobid=404${i} | bconsole ;
 done
 
 Five jobs, 40417-40421, will be canceled.
 
 A minor nitpick: the construct
 
 for i in {17..21}; do ...
 
 is a bashism [1], so it won't work in any POSIX shell.
 
 A good point! I tried the above on FreeBSD:
 
 $ cat test.sh
 #!/bin/sh
 
 for i in {17..21} ; do echo cancel yes jobid=404${i} ; done
 
 
 [dan@bast:~/bin] $ ./sh test.sh
 cancel yes jobid=404{17..21}
 [dan@bast:~/bin] $
 
 
 A portable way to do the same is to use the `seq` program
 
 for i in `seq 17 21`; do ...
 
 or to maintain an explicit counter:
 
 i=17
 while [ $i -le 21 ]; do ...; i=$(($i+1)); done
 
 Then I tried this approach but didn't find seq at all.  I tried sh, 
 csh, and tcsh.


The seq command appeared in FreeBSD 9, so if you tried it on an earlier 
version that's probably why you didn't find it.

Using seq, you might have to use -f %02g to get two-digit sequences with 
leading zeros (or -f %0Ng to get N-digit sequences with leading zeros).


 But I know about jot.  This does 5 numbers, starting at 17:
 
 $ jot 5 17
 17
 18
 19
 20
 21
 
 Thus, the script becomes:
 
 $ cat test.sh
 #!/bin/sh
 
 for i in `jot 5 17` ; do echo cancel yes jobid=404${i} ; done
 
 
 $ sh ./test.sh
 cancel yes jobid=40417
 cancel yes jobid=40418
 cancel yes jobid=40419
 cancel yes jobid=40420
 cancel yes jobid=40421


With jot you can shorten this even further:

jot -w "cancel yes jobid=404%g" 5 17

Again, you might want to zero-pad if you are cancelling, say, jobs 40405 to 
40423:

jot -w "cancel yes jobid=404%02g" 19 5

Or, better yet, just start from the job range beginning itself:

jot -w "cancel yes jobid=%g" 19 40405
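For completeness, a strictly POSIX version of the same loop that needs neither seq nor jot (and no bash brace expansion), so it runs in the stock /bin/sh on any platform. It echoes the commands here; in real use each line would be piped into bconsole:

```shell
#!/bin/sh
# Strictly POSIX job-cancel loop: no seq, no jot, no bashisms.
# Echoes the bconsole commands; in real use, pipe each into bconsole.
first=40405
last=40423
i=$first
while [ "$i" -le "$last" ]; do
    echo "cancel yes jobid=${i}"     # real use: echo "..." | bconsole
    i=$((i + 1))
done
```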

Cheers,

Paul.




Re: [Bacula-users] Backup is way too slow

2012-10-26 Thread Paul Van Wambeke

 Hi

I had very slow backup times with an Ubuntu Server 10.04 machine running 
the Bacula 5.01 director, backing up a Windows 2008 R2 server running 
Bacula-fd 5.2. I discovered that the throughput of the Ethernet adapter 
(Broadcom) in the Windows server was VERY low.

I solved the problem after some Googling by changing the Ethernet card 
settings: LSO and Jumbo Frames were set to 'LSO Enabled, Jumbo off'. I 
changed the setting to 'Both disabled'. This increased the card's 
throughput tenfold, and today I run backups at 50 MBytes/sec.

No warranty however ! This solved my problem for my configuration.

Hope this helps,

Paul

On 25/10/2012 17:48, noob1321 wrote:
 Hey Guys,
 I have my Director, DB, and Storage on a server running CentOS 6. I am 
 backing up a client over the network that is physically about 15 miles away 
 and is running a Windows 2008 R2 Small Business Server .
 I'm backing up 170 gigabytes. Right now the job has been running for 37 
 minutes and has backed up 394,920,354 bytes or 0.367798 gigabytes.
 It is processing at 181,739 bytes per second. Now that is only .17 MB/s at 
 this pace it is only backing up .6 Gigs per hour which means to back up 
 170Gigs will require nearly  two weeks  to complete. I understand this is a 
 slow process but I mean there has to be something wrong if its at  .17 MB/s!!!

 I do not have spooling enabled. I am backing up to file(my server), I am not 
 using any compression in the FileSet Resource and I am using the MD5 
 signature.

 Please help me out here!!!

 Thanks,
 noob

 +--
 |This was sent by robert97...@gmail.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--









-- 
Paul VAN WAMBEKE
ICT OpenUp! and GPI Projects
National Botanic Garden of Belgium
Bouchout Domain, Nieuwelaan 38
1860 Meise

Tel: ++32 2 260 09 66
Fax: ++32 2 260 09 45




Re: [Bacula-users] bacula 5.2.10 on Windows 2008 R2 server : 1 MByte/sec transfer rate, very slow

2012-09-10 Thread Paul Van Wambeke

On 6/09/2012 12:02, Paul Van Wambeke wrote:
 On 5/09/2012 11:21, James Harper wrote:
 Using iperf I measured following performances :

 bacula-fd  = the Windows Server 2008 R2 host, 1 Gbit/sec NIC
 bacula-dir = a Linux Ubuntu 10.04 PC, one 100 Mbit/sec NIC
 bacula-sd  = a Linux Ubuntu 10.04 server, 1 Gbit/sec NIC

 iperf server    iperf client    Performance
 --------------------------------------------
 bacula-fd       bacula-dir       10 MBytes/sec
 bacula-fd       bacula-sd       111 MBytes/sec
 bacula-dir      bacula-fd        10 MBytes/sec
 bacula-sd       bacula-fd        26 MBytes/sec

 So normally the bacula client should be able to write to the bacula storage 
 at
 26MBytes/sec ?

 Any suggestions ?

 Any crappy computer made in the last 5 years should be able to saturate a 
 gigabit link using iperf. The fact that you are only getting 26Mbytes/second 
 fd-sd is a bit worrying... it's well above the 1Mbit/second that bacula 
 appears to be limited to but it's still an indication of a major problem. I 
 haven't had that much experience with Hyper-V for performance testing but it 
 should be able to approach Xen which easily gets gigabit speeds for Windows 
 VMs. Is your switch up to the job?

 James







   Probably we could fine tune the switches, but I still believe the
 main problem resided on the interaction between the bacula-fd and the
 host server's network config ...

We have decided to re-install the windows server, this time
 without enabling the hyper-v role ... will try again and let you know if
 this changes something.

   Kind regards

   Paul

Well, we re-installed the windows servers without hyper-V or any other 
role : no change in transfer rate.

A Windows 7 laptop running bacula-fd 5.2.9 transfers at 35MBytes/sec ...

So it seems to be linked to the fact that the server is a W 2008 R2 
server ...

Am I the only one having this problem? Has anybody succeeded in backing 
up a W2008 R2 server with Bacula at correct transfer speeds?

Kind regards

Paul

-- 
Paul VAN WAMBEKE
ICT OpenUp! and GPI Projects
National Botanic Garden of Belgium
Bouchout Domain, Nieuwelaan 38
1860 Meise

Tel: ++32 2 260 09 66
Fax: ++32 2 260 09 45




Re: [Bacula-users] bacula 5.2.10 on Windows 2008 R2 server : 1 MByte/sec transfer rate, very slow

2012-09-10 Thread Paul Van Wambeke


 Hi Julian

I'm happy other users have succeeded in backing up W2008 R2 servers ... Could 
you give me the versions you are using on the Windows client and on the 
Linux director ?

Thanks

Kind regards

Paul

On 10/09/2012 13:45, Fahrer, Julian wrote:

 -Ursprüngliche Nachricht-
 Von: Paul Van Wambeke [mailto:paul.vanwamb...@br.fgov.be]
 Gesendet: Montag, 10. September 2012 13:38
 An: bacula-users@lists.sourceforge.net; James Harper
 Betreff: Re: [Bacula-users] bacula 5.2.10 on Windows 2008 R2 server : 1
 MByte/sec transfer rate, very slow


 On 6/09/2012 12:02, Paul Van Wambeke wrote:
 On 5/09/2012 11:21, James Harper wrote:
 Using iperf I measured following performances :

  bacula-fd  = the Windows Server 2008 R2 host, 1 Gbit/sec NIC
  bacula-dir = a Linux Ubuntu 10.04 PC, one 100 Mbit/sec NIC
  bacula-sd  = a Linux Ubuntu 10.04 server, 1 Gbit/sec NIC

  iperf server    iperf client    Performance
  --------------------------------------------
  bacula-fd       bacula-dir       10 MBytes/sec
  bacula-fd       bacula-sd       111 MBytes/sec
  bacula-dir      bacula-fd        10 MBytes/sec
  bacula-sd       bacula-fd        26 MBytes/sec

 So normally the bacula client should be able to write to the bacula
 storage at 26MBytes/sec ?

 Any suggestions ?

 Any crappy computer made in the last 5 years should be able to saturate
 a gigabit link using iperf. The fact that you are only getting
 26Mbytes/second fd-sd is a bit worrying... it's well above the
 1Mbit/second that bacula appears to be limited to but it's still an
 indication of a major problem. I haven't had that much experience with
 Hyper-V for performance testing but it should be able to approach Xen
 which easily gets gigabit speeds for Windows VMs. Is your switch up to the
 job?
 James







Probably we could fine tune the switches, but I still believe
 the main problem resided on the interaction between the bacula-fd and
 the host server's network config ...

 We have decided to re-install the windows server, this time
 without enabling the hyper-v role ... will try again and let you know
 if this changes something.

Kind regards

Paul

 Well, we re-installed the windows servers without hyper-V or any other
 role : no change in transfer rate.

 A Windows 7 laptop running bacula-fd 5.2.9 transfers at 35MBytes/sec ...
 So it seems to be linked to the fact that the server is a W 2008 R2 server
 ...

 Am I the only one having this problem, anybody succeeded in backup-up a
 W2008 R2 server with Bacula at correct transfer speeds ?

 Kind regards

 Paul
 Hi Paul,

 I'm backing up a lot of 2008 R2 Servers without speed issues. I've seen cases 
 where a faulty nic driver caused such problems. Have you monitored the 
 network interface and watched what happens (with wireshark for example)?
 I did not follow this thread - so this might have been suggested: Compression 
 on the fileset may cause slow transfer rates

 Kind regards

 Julian









-- 
Paul VAN WAMBEKE
ICT OpenUp! and GPI Projects
National Botanic Garden of Belgium
Bouchout Domain, Nieuwelaan 38
1860 Meise

Tel: ++32 2 260 09 66
Fax: ++32 2 260 09 45




Re: [Bacula-users] bacula 5.2.10 on Windows 2008 R2 server : 1 MByte/sec transfer rate, very slow

2012-09-06 Thread Paul Van Wambeke

On 5/09/2012 11:21, James Harper wrote:
 Using iperf I measured following performances :

  bacula-fd  = the Windows Server 2008 R2 host, 1 Gbit/sec NIC
  bacula-dir = a Linux Ubuntu 10.04 PC, one 100 Mbit/sec NIC
  bacula-sd  = a Linux Ubuntu 10.04 server, 1 Gbit/sec NIC

  iperf server    iperf client    Performance
  --------------------------------------------
  bacula-fd       bacula-dir       10 MBytes/sec
  bacula-fd       bacula-sd       111 MBytes/sec
  bacula-dir      bacula-fd        10 MBytes/sec
  bacula-sd       bacula-fd        26 MBytes/sec

 So normally the bacula client should be able to write to the bacula storage 
 at
 26MBytes/sec ?

 Any suggestions ?

 Any crappy computer made in the last 5 years should be able to saturate a 
 gigabit link using iperf. The fact that you are only getting 26Mbytes/second 
 fd-sd is a bit worrying... it's well above the 1Mbit/second that bacula 
 appears to be limited to but it's still an indication of a major problem. I 
 haven't had that much experience with Hyper-V for performance testing but it 
 should be able to approach Xen which easily gets gigabit speeds for Windows 
 VMs. Is your switch up to the job?

 James








Probably we could fine-tune the switches, but I still believe the main 
problem resides in the interaction between the bacula-fd and the host 
server's network config ...

We have decided to re-install the Windows server, this time without 
enabling the Hyper-V role ... we will try again and let you know if this 
changes something.

 Kind regards

 Paul

-- 
Paul VAN WAMBEKE
ICT OpenUp! and GPI Projects
National Botanic Garden of Belgium
Bouchout Domain, Nieuwelaan 38
1860 Meise

Tel: ++32 2 260 09 66
Fax: ++32 2 260 09 45




Re: [Bacula-users] bacula 5.2.10 on Windows 2008 R2 server : 1 MByte/sec transfer rate, very slow

2012-09-05 Thread Paul Van Wambeke

On 5/09/2012 02:52, James Harper wrote:
   Hi

 I have Bacula 5.01 Director installed on a Linux Ubuntu 10.04 server, and a
 Bacula 5.2.10 Client (Bacula-fd) running on a Windows Server 2008
 R2 SP1 server, with Hyper-V role installed. Purpose is her to backup the host
 server, not the virtual machines.

 Initially the backup transfer rate was extremely slow (kbytes/sec).
 Changing the Network adapter (Broadcom NetXtreme 5714) settings to
 Large Send Offload (LSO) = off as suggested in some posts increased the
 transfer rate to 1MB/sec, which is still 10 to 80 times slower than the 
 transfer
 rates I have with other servers running Linux or Windows 7. Transferring a 
 file
 'by hand' over the net runs at 80 MB/sec ...So I suspect a problem with
 Bacula-fd.

 Any idea how to configure the server so that I can get decent backup transfer
 speeds ? I can't imagine these servers can't be managed by Bacula.

 First use something like iperf to make sure that the problem is not bacula. 
 Test all possible combinations of send/receive for the following:

 . host server
 . bacula sd server
 . another pc/server that is separate (can be linux or windows)

 That should give you concrete evidence as to whether the problem is related 
 to bacula. Hyper-V network can be terribly difficult in some cases.

 James








Thanks James for the suggestion.

I have made the performance tests : copying a file from the host server 
to another Windows 7 PC was done at 80MBytes/sec.

Using iperf I measured following performances :

bacula-fd  = the Windows Server 2008 R2 host, 1 Gbit/sec NIC
bacula-dir = a Linux Ubuntu 10.04 PC, one 100 Mbit/sec NIC
bacula-sd  = a Linux Ubuntu 10.04 server, 1 Gbit/sec NIC

iperf server    iperf client    Performance
--------------------------------------------
bacula-fd       bacula-dir       10 MBytes/sec
bacula-fd       bacula-sd       111 MBytes/sec
bacula-dir      bacula-fd        10 MBytes/sec
bacula-sd       bacula-fd        26 MBytes/sec

So normally the bacula client should be able to write to the bacula 
storage at 26MBytes/sec ?

Any suggestions ?

Paul

-- 
Paul VAN WAMBEKE
ICT OpenUp! and GPI Projects
National Botanic Garden of Belgium
Bouchout Domain, Nieuwelaan 38
1860 Meise

Tel: ++32 2 260 09 66
Fax: ++32 2 260 09 45




[Bacula-users] bacula 5.2.10 on Windows 2008 R2 server : 1 MByte/sec transfer rate, very slow

2012-09-04 Thread Paul Van Wambeke

 Hi

I have the Bacula 5.01 Director installed on a Linux Ubuntu 10.04 server, 
and a Bacula 5.2.10 Client (Bacula-fd) running on a Windows Server 2008 
R2 SP1 server with the Hyper-V role installed. The purpose here is to 
back up the host server, not the virtual machines.

Initially the backup transfer rate was extremely slow (kbytes/sec). 
Changing the Network adapter (Broadcom NetXtreme 5714) settings to 
Large Send Offload (LSO) = off, as suggested in some posts, increased the 
transfer rate to 1 MB/sec, which is still 10 to 80 times slower than the 
transfer rates I get with other servers running Linux or Windows 7. 
Transferring a file 'by hand' over the net runs at 80 MB/sec ... so I 
suspect a problem with Bacula-fd.

Any idea how to configure the server so that I can get decent backup 
transfer speeds ? I can't imagine these servers can't be managed by Bacula.

Thanks in advance for your help,

Kind regards

Paul

-- 
Paul VAN WAMBEKE
ICT OpenUp! and GPI Projects
National Botanic Garden of Belgium
Bouchout Domain, Nieuwelaan 38
1860 Meise

Tel: ++32 2 260 09 66
Fax: ++32 2 260 09 45




Re: [Bacula-users] FreeBSD 9 and ZFS with compression - should be fine?

2012-02-10 Thread Paul Mather
On Feb 10, 2012, at 1:53 AM, Silver Salonen wrote:

 On Thu, 9 Feb 2012 14:58:33 -0500, Paul Mather wrote:
 On Feb 9, 2012, at 2:21 PM, Steven Schlansker wrote:
 On the flip side, compression seems to be a very big win.  I'm 
 seeing ratios from 1.7 to 2.5x savings and the CPU usage is claimed to 
 be relatively cheap.
 
 
 That's what I am seeing, too.  On the fileset I tried to dedup, I'm
 currently seeing a compressratio of 1.51x, which I'm happy with for
 that data.  Enabling ZFS compression appears to have negligible
 overheads, so having turned it on is a big win for me.
 
 I'll ask just in case - you don't have Bacula FD's compression enabled 
 for these filesets which give these compression ratios, do you?


No, this is just using ZFS option compression=gzip-9 on the fileset in the 
pool.

Cheers,

Paul.





Re: [Bacula-users] FreeBSD 9 and ZFS with compression - should be fine?

2012-02-09 Thread Paul Mather
On Feb 9, 2012, at 2:21 PM, Steven Schlansker wrote:

 
 On Feb 9, 2012, at 11:05 AM, Mark wrote:
 Steven, out of curiosity, do you see any benefit with dedup (assuming that 
 bacula volumes are the only thing on a given zfs volume).  I did some 
 initial trials and it appeared that bacula savesets don't dedup much, if at 
 all, and some searching around pointed to the bacula volume format writing a 
 unique value (was it jobid?) to every block, so no two blocks are ever the 
 same.  I'd backup hundreds of gigs of data and the dedupratio always 
 remained 1.00x.
 
 I didn't do any research, but can confirm that it seems to be useless to turn 
 dedup on.  My pool has always been at 1.00x
 I'm going to turn it off because from what I hear dedup is pretty expensive 
 to run, especially if you don't actually save anything by it.


Some time ago, I enabled dedup on a fileset with ~8 TB of data (about 4 million 
files) on a FreeBSD 8-STABLE system.  Bad move!  The machine has 16 GB of RAM 
but enabling dedup utterly killed it.  I discovered, through further research, 
that dedup requires either a lot of RAM or a read-optimised SSD to hold the 
dedup table (DDT).  Small filesets may work fine, but anything else will 
quickly eat up RAM.  Worse still, the DDT is considered ZFS metadata, and so is 
limited to 25% of the ARC cache, so you need huge amounts of ARC for large DDT 
tables.  I've read that a rule of thumb is that for every 1 TB of data you 
should expect 5 GB of DDT, assuming an average block size of 64 KB.  For large 
sizes, therefore, it's not feasible to store the entire DDT in RAM and thus 
you'd be looking at a low-latency L2ARC solution instead (e.g., SSD).
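The rule of thumb quoted above can be reproduced with a little shell arithmetic. The ~320 bytes per DDT entry is a commonly cited ZFS estimate, not a figure from this thread:

```shell
#!/bin/sh
# Reproduce the rule of thumb above: DDT size = (data / average block
# size) * bytes per DDT entry.  The ~320 bytes/entry figure is a
# commonly cited ZFS estimate, not a number from this thread.
data_tb=8          # the fileset size mentioned above
blk_kb=64          # assumed average block size
entry_bytes=320
awk -v tb="$data_tb" -v blk="$blk_kb" -v e="$entry_bytes" 'BEGIN {
    entries = tb * 1e12 / (blk * 1024)
    printf "DDT ~= %.1f GB for %d TB of data\n", entries * e / 1e9, tb
}'
# For 8 TB this lands near 39 GB of DDT -- far beyond the 25%-of-ARC
# metadata budget of a 16 GB machine, matching the experience above.
```

At 1 TB the same formula gives roughly 4.9 GB, which is where the "5 GB of DDT per TB" rule of thumb comes from.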


 On the flip side, compression seems to be a very big win.  I'm seeing ratios 
 from 1.7 to 2.5x savings and the CPU usage is claimed to be relatively cheap.


That's what I am seeing, too.  On the fileset I tried to dedup, I'm currently 
seeing a compressratio of 1.51x, which I'm happy with for that data.  Enabling 
ZFS compression appears to have negligible overheads, so having turned it on is 
a big win for me.

Cheers,

Paul.





Re: [Bacula-users] Getting back to the basics; Volumes, Pools, Reusability

2011-08-05 Thread Paul Mather
On Aug 4, 2011, at 8:46 PM, Joseph L. Casale wrote:

 Why do we use volumes?  It sounds like a silly question, but it's
 genuine.  Is it so a backup can span several media types?  Tape, file,
 disk, Pandora's box, what?  Why do I care about volumes and how long
 they're retained, how often they're pruned, recycled, or what type
 they are?
 
 You have to associate where the data was placed, literally as tapes are
 finite in size and files shouldn't be infinite. It's a way to manage the
 correlation.  Don't think that files are any different than tapes, they
 have much the same limitations except for the obvious like wear.


Well, there are some very important differences between tape and disk that 
directly influence the implementation of things like volume recycling.  
Specifically, if you want Bacula to be somewhat media agnostic, then you have 
to pander to the more restrictive tape media, which makes some decisions seem 
illogical when applied to disk.  With tape, file marks explicitly delimit files 
on the linear tape; random addressing is difficult, though appending is 
natural.  This explains why volumes are recycled wholly and not partially.  For 
many, this doesn't make sense for the more flexible disk media, which doesn't 
require its blocks to be written contiguously.  But, for an abstracted storage 
model, it makes sense to let the lowest common denominator dictate to more 
flexible media types.

Many Bacula concepts make a lot more sense if you think in terms of managing 
tape.

Cheers,

Paul.





Re: [Bacula-users] Reliable Backups without Tapes?

2011-07-15 Thread Paul Mather
On Jul 14, 2011, at 7:56 PM, Ken Mandelberg wrote:

 Under Legato the license restriction artificially keep the file-device 
 small relative to the tape storage. However, these days disks are 
 cheaper than tapes and license free we could afford a lot of disk space.

I know hard drives are cheap these days, but I didn't realise they were cheaper 
than tape nowadays.  LTO-4 media works out at less than 5 cents per GB 
(uncompressed) last time I bought it, and I believe LTO-5 is less than 4 cents 
per GB (again, uncompressed).
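As a sanity check on those cents-per-GB figures, a one-liner with an illustrative cartridge price of $35 (an assumption, not a quote from the thread) against LTO-4's 800 GB native capacity:

```shell
#!/bin/sh
# Sanity check on the cents-per-GB figures above.  The $35 cartridge
# price is an illustrative assumption, not a quote from the thread;
# 800 GB is LTO-4's native (uncompressed) capacity.
price_usd=35
native_gb=800
awk -v p="$price_usd" -v c="$native_gb" 'BEGIN {
    printf "LTO-4: %.2f cents per native GB\n", p / c * 100
}'
```

Anything under about $40 per cartridge keeps LTO-4 below the 5 cents/GB figure quoted above.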

But, don't underestimate some of the perhaps-neglected advantages of tape:

- Easier to offsite for long-term archiving and disaster recovery
- WORM capability for regulatory compliance where needed
- Easier to expand total capacity---just buy more media
- Increasing capacity doesn't continually eat up rack space and drive up power 
consumption

Personally, I like the fact that Bacula supports a mixed disk/tape solution, 
allowing for disk to provide faster near-line access to more recent backups 
(e.g., incrementals) and tape for older material.

Cheers,

Paul.




Re: [Bacula-users] Invalid Tape position - Marking tapes with error

2011-07-07 Thread Paul Mather
On Jul 7, 2011, at 2:13 PM, Martin Simmons wrote:

 On Thu, 7 Jul 2011 10:30:36 -0600, James Woodward said:
 
 Hello,
 
 I haven't seen anything about using nsa devices to use over something like
 an sa device. I did a bit of a search but haven't seen anything that really
 explains that portion to me.
 
 http://www.bacula.org/5.0.x-manuals/en/main/main/Storage_Daemon_Configuratio.html#SECTION00203
 http://www.bacula.org/5.0.x-manuals/en/problems/problems/Testing_Your_Tape_Drive.html#SECTION00413000
 
 Also, all of the examples use non-rewind devices.
 
 
 When I set these up I did see a recommendation to use the pass through
 devices e.g pass0, pass1, pass2, pass3. I don't remember the specifics but
 the passthrough devices themselves seemed to be problematic.
 
 Yes, the passthrough devices shouldn't be used for data transfer.  They should
 be used to control an autochanger robot.


Normally, under FreeBSD, you will have a ch device for your autoloader, which 
you should use instead.  E.g.,

tape# camcontrol devlist
<QUANTUM ULTRIUM 4 2210>   at scbus0 target 8 lun 0 (sa0,pass0)
<QUANTUM UHDL 0075>        at scbus0 target 8 lun 1 (ch0,pass1)
[[...]]


The chio command uses /dev/ch0 by default.

Cheers,

Paul.




Re: [Bacula-users] Invalid Tape position - Marking tapes with error

2011-07-07 Thread Paul Mather
On Jul 7, 2011, at 12:30 PM, James Woodward wrote:

 Hello,
 
 I haven't seen anything about using nsa devices to use over something like an 
 sa device. I did a bit of a search but haven't seen anything that really 
 explains that portion to me.


The FreeBSD tape driver uses different device names as shortcuts for the 
various different action-on-close semantics.  The most common are /dev/saN, 
which will rewind the tape when the device is closed, and /dev/nsaN, which does 
not rewind the tape when the device is closed.  (Think n for non-rewinding.)  
If you want multiple files on a single tape, you will usually want to use 
/dev/nsaN.

See man sa(4) for details.

Cheers,

Paul.




Re: [Bacula-users] Mount request?

2011-06-24 Thread Paul Fontenot
No, I made some Pool changes. Before it would load tapes when it needed them
and now it won't. If I mount the tape it will run the backups but when the
schedule calls for a tape from a different pool it waits for me to do the
unload and load of media. That just seems like I borked something in the
config.

On Thu, Jun 23, 2011 at 1:40 PM, John Drescher dresche...@gmail.com wrote:

 2011/6/23 Paul Fontenot wpfonte...@gmail.com:
  I have an autochanger that passes all the tests but waits for me to
  load and mount the tapes. I'm certain I've overlooked something fairly
  obvious and would appreciate any pointers. I've got a USB drive working
  as an autochanger with autofs, so I imagine a real one shouldn't need
  my help to do its job.
 

 Did you recently use the umount command?

 John



[Bacula-users] Mount request?

2011-06-23 Thread Paul Fontenot
I have an autochanger that passes all the tests but waits for me to load
and mount the tapes. I'm certain I've overlooked something fairly obvious and
would appreciate any pointers. I've got a USB drive working as an
autochanger with autofs, so I imagine a real one shouldn't need my help to do
its job.

Thanks


[Bacula-users] Client machine browsing only its backups

2011-05-13 Thread Paul Pathiakis
Hi,

I'm new on the list and I don't know if this question has been addressed.

I'm looking to provide an offsite backup service to my clients and I'd like to 
use Bacula as the product. (I love it for enterprise backups.)

I have clients that would like, as part of their EDR, to send backups offsite 
to my company.  This is easily done by installing a bacula-fd client.

However, I don't want to incur the overhead of a backup operator/administrator 
to service these requests.  I'd like the customer/client to be able to do this 
themselves.  My question is whether I can configure Bacula's BAT or the web 
interface to allow each client to see only its own site's backups.  I don't 
want customers to see other customers' backups.

Is there a way to do this?

Thank you,

Paul


Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Paul Mather
On Jan 20, 2011, at 11:01 AM, John Drescher wrote:

 This is normal. If you want fast compression do not use software
 compression and use a tape drive with HW compression like LTO drives.
 
 John
 Not really an option for file/disk devices though.
 
 I've been tempted to experiment with BTRFS using LZO or standard zlib
 compression for storing the volumes and see how the performance compares
 to having bacula-fd do the compression before sending - I have a
 suspicion the former might be better..
 
 
 Doing the compression at the filesystem level is an idea I have wanted
 to try for several years. Hopefully one of the filesystems that
 support this becomes stable soon.

I've been using ZFS with a compression-enabled fileset for a while now under 
FreeBSD.  It is transparent and reliable.  Looking just now, I'm not getting 
great compression ratios for my backup data: 1.09x.  I am using the 
speed-oriented compression algorithm on this fileset, though, because the 
hardware is relatively puny.  (It is a Bacula test bed.)  Probably I'd get 
better compression if I enabled one of the GZIP levels.

Cheers,

Paul.





Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Paul Mather
On Jan 20, 2011, at 12:44 PM, Dan Langille wrote:

 
 On Thu, January 20, 2011 12:28 pm, Silver Salonen wrote:
 On Thursday 20 January 2011 19:02:33 Paul Mather wrote:
 On Jan 20, 2011, at 11:01 AM, John Drescher wrote:
 
 This is normal. If you want fast compression do not use software
 compression and use a tape drive with HW compression like LTO
 drives.
 
 John
 Not really an option for file/disk devices though.
 
 I've been tempted to experiment with BTRFS using LZO or standard zlib
 compression for storing the volumes and see how the performance
 compares
 to having bacula-fd do the compression before sending - I have a
 suspicion the former might be better..
 
 
 Doing the compression at the filesystem level is an idea I have wanted
 to try for several years. Hopefully one of the filesystems that
 support this becomes stable soon.
 
 I've been using ZFS with a compression-enabled fileset for a while now
 under FreeBSD.  It is transparent and reliable.  Looking just now, I'm
 not getting great compression ratios for my backup data: 1.09x.  I am
 using the speed-oriented compression algorithm on this fileset, though,
 because the hardware is relatively puny.  (It is a Bacula test bed.)
 Probably I'd get better compression if I enabled one of the GZIP levels.
 
 Isn't the low compression ratio because of the Bacula volume format, which
 messes up the data from the filesystem's point of view? That's the same
 thing that is a problem for implementing (or using FS-based) deduplication
 in Bacula.
 
 I also use ZFS on FreeBSD.  Perhaps the above is a typo.  I get nearly 2.0
 compression ratio.
 
 $ zfs get compressratio
 NAME  PROPERTY   VALUE  SOURCE
 storage   compressratio  1.89x  -
 storage/compressedcompressratio  1.90x  -
 storage/compressed/bacula compressratio  1.90x  -
 storage/compressed/bacula@2010.10.19  compressratio  1.91x  -
 storage/compressed/bacula@2010.10.20  compressratio  1.91x  -
 storage/compressed/bacula@2010.10.20a compressratio  1.91x  -
 storage/compressed/bacula@2010.10.20b compressratio  1.91x  -
 storage/compressed/bac...@pre.pool.merge  compressratio  1.94x  -
 storage/compressed/home   compressratio  1.00x  -
 storage/pgsql compressratio  1.00x  -


Nope, not a typo:

backup# zfs get compressratio
NAME  PROPERTY   VALUE  SOURCE
backups   compressratio  1.07x  -
backups/baculacompressratio  1.09x  -
backups/hosts compressratio  1.46x  -
backups/san   compressratio  1.06x  -
backups/san@filedrop  compressratio  1.06x  -


The backups/bacula fileset is where my Bacula volumes are stored.  As I 
surmised, I get better compression ratios under GZIP-9 compression:

backup# zfs get compression
NAME  PROPERTY VALUE SOURCE
backups   compression  off   default
backups/baculacompression  onlocal
backups/hosts compression  gzip-9local
backups/san   compression  onlocal
backups/san@filedrop  compression  - -

(Compression=on equates to lzjb, which is the most lightweight method, CPU 
resources wise, but not the best in terms of compression ratio achieved.)

I will probably switch the other filesets to GZIP compression, as ZFS 
performance has improved significantly under RELENG_8...
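For anyone wanting to try the same switch, it is a one-liner. The dataset name below is taken from the listing above; note that ZFS only applies the new algorithm to data written after the change, so existing volumes keep their old compression:

```shell
zfs set compression=gzip-9 backups/bacula
zfs get compression,compressratio backups/bacula
```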

Cheers,

Paul.






Re: [Bacula-users] incremental backups too large

2011-01-13 Thread Paul Mather
On Jan 13, 2011, at 3:44 PM, Lawrence Strydom wrote:

 I understand that something is adding data and logically the backup should 
 grow. What I don't understand is why the entire file has to be backed up if 
 only a few bytes of data have changed. It is mainly Outlook .pst files and 
 MSSQL database files that cause these large backups. Some of these files are 
 several GB. 


Because Bacula is file-based (as are most other backup systems), and not 
block-based (like, e.g., Norton Ghost), many-messages-in-a-single-file mailbox 
formats like PST and mbox tend to play havoc with backups: adding even a 
single message to your mailbox causes the whole mailbox file to be backed up 
(i.e., the new message plus all the old messages in there).  That's one reason 
Apple changed over to maildir-like message storage in their mail client when 
they introduced the Time Machine backup system: it is much friendlier to 
backups, because a new message causes only the small file containing that 
message to be backed up in a subsequent incremental backup.

You'll only get close to the sort of behaviour you want (i.e., only the changed 
data in the file is backed up) if and when Bacula gains some measure of 
deduplication support.  (Maybe not even then, depending upon how it decides to 
do it.)


 My understanding of an incremental backup is that only changed data is backed 
 up. It seems that at the moment my Bacula is doing differential backups, ie 
 backing up the entire file if the timestamp has changed, even though I have 
 configured it for incremental.


If the last modified timestamp changes since the previous incremental backup 
then Bacula will assume the file has changed and include it in the 
incremental---even if the file data has not changed.
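That behaviour is easy to reproduce. A small Python sketch (an illustration of the principle, not Bacula's actual code) rewrites a file with identical bytes and shows that a pure timestamp test would still select it:

```python
import os
import time

path = "demo.dat"
with open(path, "wb") as f:
    f.write(b"payload")
last_backup_time = time.time()       # pretend the incremental ran here

time.sleep(1.1)                      # ensure the new mtime is clearly later
with open(path, "wb") as f:          # rewrite the SAME bytes
    f.write(b"payload")

# Timestamp-based change detection: mtime moved, so the file is selected.
would_back_up = os.path.getmtime(path) > last_backup_time
os.remove(path)
print(would_back_up)                 # True, despite identical content
```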

You can enable Accurate backups and check other attributes (such as MD5 
checksums of the file) if you want Bacula to take more care in only backing up 
files that have truly changed.  This will slow down the backup speed, though.
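In bacula-dir.conf that might look roughly like the following. The resource names are placeholders, and the exact directives should be checked against the manual for your Bacula version:

```
Job {
  Name = "example-backup"
  Accurate = yes                  # consult catalog data, not just timestamps
  ...
}

FileSet {
  Name = "example-fileset"
  Include {
    Options {
      signature = MD5             # store an MD5 digest for each file
      accurate = mcs5             # compare mtime, ctime, size and MD5
    }
    File = /home
  }
}
```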

Cheers,

Paul.




[Bacula-users] SLES9 x64 ONLY intermittent bsock error and fails job. 50% of the time the jobs work all other clients work fine

2011-01-04 Thread Paul Hanson
Hi,

I have an installation that was previously running version 2.4.4 and was 
upgraded to 5.0.3 with good success. However, there was a pre-existing problem 
with the SLES9 x64 clients intermittently failing jobs due to bsock errors. 
The error has carried forward from 2.4.4 to 5.0.3, so upgrading to the latest 
version has not fixed the fault. The errors occur with as little as a few MB 
transferred, through to a couple of hundred MB. The jobs can be full or 
incremental, using either a tape or disk pool. More often the backup fails 
with only 40-50 MB backed up, so the job starts but fails quite quickly, then 
waits for the network timeout values to be exceeded before reporting the 
failure and cancelling the job.

The client base is all version 5.0.3, as are the SD and DIR services. We have 
approximately 80 clients, of which 90% are SLES10 (x64 and a couple of 32-bit 
versions) and work without error, and 8% are Windows 2003 and also work 
without error, but the two SLES9 x64 builds have BOTH exhibited this 
intermittent network timeout issue. If I run a manual backup (instead of a 
scheduled one) during a non-maintenance window, the same client backs up 
correctly without error (tape and disk pool).

It has ONLY been the two SLES9 x64 platforms that have errored this way; all 
other clients do NOT error at all. I have checked the default TCP timeout 
values, which are all 7200 seconds, but my feeling is that this is NOT the 
fault. This may be a threading or concurrency issue specific to Bacula (either 
2.4.4 or 5.0.3) on SLES9 x64, or it could even be an MTU issue, but that 
doesn't explain why it works 50% of the time. The majority of the machines are 
virtual and share physical hosts, so networking shouldn't be an issue per se.

I would appreciate any opinions or experience with this type of error as this 
is proving to be difficult to repeat manually.

Errors...

Error: bsock.c:393 Write error sending 65536 bytes to Storage 
daemon:mybaculaserver.mylocaldomain:9103: ERR=Broken pipe

Fatal error: backup.c:1024 Network send error to SD. ERR=Broken pipe






Re: [Bacula-users] Verify differences: SHA1 sum doesn't match but it should

2010-08-30 Thread Paul Mather
On Aug 30, 2010, at 6:41 AM, Henrik Johansen wrote:

 Like most ZFS related stuff it all sounds (and looks) extremely easy but
 in reality it is not quite so simple.

Yes, but does ZFS make things easier or harder?

Silent data corruption won't go away just because your pool is large. :-)

(But, this is all getting a bit off-topic for Bacula-users.)

Cheers,

Paul.




Re: [Bacula-users] Verify differences: SHA1 sum doesn't match but it should

2010-08-28 Thread Paul Mather
On Aug 28, 2010, at 7:12 AM, Steve Costaras wrote:

 Could be due to a transient error (a transmission error, or a wild/torn read 
 at the time of calculation).  I see this a lot with integrity checking of 
 files here (50 TiB of storage).
 
 The only way to get around this now is to take a known-good SHA1/MD5 hash of 
 the data (2-3 reads of the file, making sure they all match and that the 
 file is not corrupted), save that as a baseline, and then, when doing 
 reads/compares, if one fails, do another re-read to see whether the first 
 read was in error, and compare that with your baseline. This is one reason 
 why I'm switching to the new generation of SAS drives that have ioecc checks 
 on READS, not just writes, to help cut down on some of this.
 
 Corruption does occur as well, and is more probable the higher the capacity 
 of the drive. Ideally you would have a drive that does ioecc on reads, plus 
 uses the T10 PI extensions (DIX/DIF) from the drive to the controller up to 
 your filesystem layer.  That won't always prevent it by itself, but would 
 allow, if you have a RAID setup, some self-healing when a drive reports a 
 non-transient error (i.e. a corrupted sector of data).
 
 However, the T10 PI extensions are only on SAS/FC drives (520/528-byte 
 blocks), and as far as I can tell only the new LSI HBAs support a small 
 subset of this (no hardware RAID controllers that I can find), and I have 
 not seen any support up at the OS/filesystem level.  SATA is not included at 
 all, as the T13 group opted not to include it in the spec.

You could also stick with your current hardware and use a file system that 
emphasises end-to-end data integrity, like ZFS.  ZFS checksums at many levels 
and has a "don't trust the hardware" mentality.  It can detect silent data 
corruption and automatically self-heal where redundancy permits.

ZFS also supports pool scrubbing (akin to the patrol reading of many RAID 
controllers) for proactive detection of silent data corruption.  With drive 
capacities becoming very large, the probability of an unrecoverable read 
becomes very high.  This is significant even in redundant storage systems, 
because a drive failure necessitates a lengthy rebuild period during which 
the storage array lacks any redundancy (in the case of RAID-5).  It is for 
this reason that RAID-6 (ZFS raidz2) is becoming de rigueur for 
many-terabyte arrays using large drives, and, specifically, the reason ZFS 
garnered its triple-parity raidz3 pool type (in ZFS pool version 17).
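A scrub is a single command. The pool name below is a placeholder:

```shell
zpool scrub tank      # read and verify every block, repairing from redundancy
zpool status tank     # shows scrub progress and any checksum errors found
```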

I believe Btrfs intends to bring many ZFS features to Linux.

Cheers,

Paul.


Re: [Bacula-users] Dell PV-124T with Ultrium TD4, Hardware or Software compression?

2010-08-14 Thread Paul Mather
On Aug 13, 2010, at 2:23 PM, Dietz Pröpper wrote:

 You:
 On Aug 13, 2010, at 4:10 AM, Dietz Pröpper wrote:
 IMHO there are two problems with hardware compression:
 1. Data mix: The compression algorithms tend to work quite well on
 compressible stuff, but can't cope very well with precompressed stuff,
 i.e. encrypted data or media files. On an old DLT drive (but modern
 hardware should perform in a similar fashion), I get around 7MB/s
 with normal data and around 3MB/s with precompressed stuff. The raw
 tape write rate is somewhere around 4MB/s. And even worse - due to
 the fact that the compression blurs precompressed data, it also takes
 noticeably more tape space.
 
 Those problems affect software compression, too.
 
 Hmm, I can't reproduce them with gzip on the data in question.

Maybe, then, your job is not I/O bound?

If the backup is limited by the speed at which you can write to tape, then it 
logically follows that you will get the observed behaviour you mention above.  
More compressible source data will lead to faster backup rates, because those 
data compress to fewer bits that need to be written to tape.  Conversely, if 
the compression algorithm does not take steps to guard against growth on 
pre-compressed input, backup speeds will fall below the nominal write speed 
with such data, because the source data result in more bits to be written, 
which takes longer (relative to the amount of source consumed).

If you are not getting something akin to this observed behaviour then your 
backup is not being limited by tape write speed, but by something else such as 
source input speed or compression speed.
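The arithmetic behind this is easy to sketch. The 4 MB/s raw rate and the ~7/~3 MB/s observations come from the post above; the model and the illustrative compression ratios (1.75:1, and 0.75:1 for data that grows) are mine:

```python
def effective_source_rate(raw_tape_rate, compression_ratio):
    """MB/s of *source* data consumed when the tape drive is the bottleneck:
    the drive writes at its raw rate, and each written byte represents
    compression_ratio bytes of source."""
    return raw_tape_rate * compression_ratio

raw = 4.0  # MB/s raw tape write rate, as quoted in the thread

# Data shrinking 1.75:1 matches the ~7 MB/s seen with "normal" data.
print(effective_source_rate(raw, 1.75))  # 7.0
# Pre-compressed data that grows by a third matches the ~3 MB/s figure.
print(effective_source_rate(raw, 0.75))  # 3.0
```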

Cheers,

Paul.




Re: [Bacula-users] Dell PV-124T with Ultrium TD4, Hardware or Software compression?

2010-08-13 Thread Paul Mather
On Aug 13, 2010, at 4:10 AM, Dietz Pröpper wrote:

 IMHO there are two problems with hardware compression:
 1. Data mix: The compression algorithms tend to work quite well on 
 compressible stuff, but can't cope very well with precompressed stuff, i.e. 
 encrypted data or media files. On an old DLT drive (but modern hardware 
 should perform in a similar fashion), I get around 7MB/s with normal data 
 and around 3MB/s with precompressed stuff. The raw tape write rate is 
 somewhere around 4MB/s. And even worse - due to the fact that the 
 compression blurs precompressed data, it also takes noticeably more tape 
 space.

Those problems affect software compression, too.

LTO takes steps to ameliorate the effects of pre-compressed/high entropy input 
data by allowing an output block to be prefixed as being uncompressed.  So, if 
input data would cause a block to grow due to compression, it can output the 
original input itself, with the block prefixed, meaning only a very tiny 
percentage increase in tape usage for stretches of high-entropy input.  
Software compression also takes steps to limit growth in output due to 
highly-compressed input.
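The same guard is easy to express in code. This is a sketch of the idea using zlib, not the actual LTO block format:

```python
import os
import zlib

def pack_block(block: bytes) -> bytes:
    """Prefix each output block with a 1-byte flag: compressed if that
    helps, otherwise stored verbatim at a cost of one byte of overhead."""
    compressed = zlib.compress(block)
    if len(compressed) < len(block):
        return b"\x01" + compressed   # compression paid off
    return b"\x00" + block            # store raw rather than grow

repetitive = b"abcabcabc" * 1000      # compresses well
high_entropy = os.urandom(9000)       # zlib output would exceed the input

packed_rep = pack_block(repetitive)
packed_rand = pack_block(high_entropy)

print(len(packed_rep) < len(repetitive))     # True: compressed block
print(len(packed_rand) - len(high_entropy))  # 1: just the flag byte
```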

 2. Vendors: I've seen it more than once that tape vendors managed to break 
 their own compression, which means that a replacement tape drive two years 
 younger than it's predecessor can no longer read the compressed tape. 
 Compatibility between vendors, the same.
 So, if the compression algorithm is not defined in the tape drive's 
 standard then it's no good idea to even think about using the tape's 
 hardware compression.

I agree with point 2, however I believe the trend has been to move towards 
using algorithms defined and documented in published standards for the very 
reasons you state.

Cheers,

Paul.




Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-09 Thread Paul Mather
On Aug 9, 2010, at 2:55 AM, Henry Yen wrote:

 On Fri, Aug 06, 2010 at 10:48:10AM +0200, Christian Gaul wrote:
 Even when catting to /dev/dsp i use /dev/urandom.. Blocking on
 /dev/random happens much too quickly.. and when do you really need that
 much randomness.
 
 I get about 40 bytes on a small server before blocking.

On Linux, /dev/random will block if there is insufficient entropy in the pool.  
Unlike /dev/random, /dev/urandom will not block on Linux, but will reuse 
entropy in the pool.  Thus, /dev/random produces higher quality, lower quantity 
random data than /dev/urandom.  For the purposes of compressibility tests, the 
pseudorandom data of /dev/urandom is perfectly fine.  The /dev/random device is 
better used, e.g., for generating cryptographic keys.
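A quick illustration of why /dev/urandom-style output is good enough for compressibility tests (Python's os.urandom draws from the same kind of OS source):

```python
import os
import zlib

random_data = os.urandom(1_000_000)   # high-entropy pseudorandom bytes
text_like = b"the quick brown fox jumps over the lazy dog\n" * 20_000

random_ratio = len(zlib.compress(random_data)) / len(random_data)
text_ratio = len(zlib.compress(text_like)) / len(text_like)

print(round(random_ratio, 3))  # ~1.0: pseudorandom data does not compress
print(text_ratio < 0.05)       # True: repetitive text collapses dramatically
```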

 
 Reason 1: the example I gave yields a file size for tempchunk of 512MB,
 not 1MB, as given in your counter-example.  I agree that (at least 
 now-a-days)
 catting 1MB chunks into a 6MB chunk is likely (although not assured)
 to lead to greatly reduced size during later compression, but I disagree
 that catting 512MB chunks into a 3GB chunk is likely to be compressible
 by any general-purpose compressor.
 
 Which is what i meant with way bigger than the library size of the
 algorithm.  Mostly my Information was pitfalls to look out for when
 testing the speed of your equipment, if you went ahead and cat-ted 3000
 x 1MB, i believe the hardware compression would make something highly
 compressed out of it.
 My guess is it would work for most chunks around half as large as the
 buffer size of the drive (totally guessing).
 
 I think that the tape drive manufacturers don't make large buffer/CPU
 capacity in their drives yet.  I finally did a test on an SDLT2 (160GB)
 drive; admittedly, it's fairly old as tape drives go, but tape technology
 appears to be rather a bit slower than disk technology, at least as far
 as raw capacity is concerned.  I created two files from /dev/urandom;
 one was 1GB, the other a mere 10K.  I then created two identically-sized
 files corresponding to each of these two chunks (4 of the first and approx.
 400k of the second).  Writing them to the SDLT2 drive using 60k blocksize,
 with compression on, yielded uncanny results: the writable capacity before
 hitting EOT was within 0.01%, and the elapsed time was within 0.02%.

As I posted here recently, even modern LTO tape drives use only a 1 KB (1024 
byte) history buffer for its sliding window-based compression algorithm.  So, 
any repeated random chunk greater than 1 KB in size will be incompressible by 
LTO tape drives.

 I see there's a reason to almost completely ignore the so-called compressed
 capacity claims by tape drive manufacturers...

By definition, random data are not compressible.  It's my understanding that 
the "compressed capacity" of tapes is based explicitly on an expected 2:1 
compression ratio for source data (and this is usually cited somewhere in the 
small print).  That is a reasonable estimate for text.  Other data may 
compress better or worse.  Already-compressed or encrypted data will be 
incompressible to the tape drive.  In other words, "compressed capacity" is 
heavily dependent on your source data.

Cheers,

Paul.


Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-06 Thread Paul Mather
On Aug 6, 2010, at 4:48 AM, Christian Gaul wrote:

 Am 05.08.2010 21:56, schrieb Henry Yen:
 On Thu, Aug 05, 2010 at 17:17:39PM +0200, Christian Gaul wrote:
 
[[...]]
 
 /dev/urandom seems to measure about 3MB/sec or thereabouts, so creating
 a large uncompressible file could be done sort of like:
 
    dd if=/dev/urandom of=tempchunk count=1048576
    cat tempchunk tempchunk tempchunk tempchunk tempchunk tempchunk > bigfile
 
 
 cat-ting random data a couple of times to make one big random file won't
 really work, unless the size of the chunks is way bigger than the
 library size of the compression algorithm.
 
 Reason 1: the example I gave yields a file size for tempchunk of 512MB,
 not 1MB, as given in your counter-example.  I agree that (at least 
 now-a-days)
 catting 1MB chunks into a 6MB chunk is likely (although not assured)
 to lead to greatly reduced size during later compression, but I disagree
 that catting 512MB chunks into a 3GB chunk is likely to be compressible
 by any general-purpose compressor.
 
 
 Which is what i meant with way bigger than the library size of the
 algorithm.  Mostly my Information was pitfalls to look out for when
 testing the speed of your equipment, if you went ahead and cat-ted 3000
 x 1MB, i believe the hardware compression would make something highly
 compressed out of it.
 My guess is it would work for most chunks around half as large as the
 buffer size of the drive (totally guessing).

The hardware compression standard used by LTO drives specifies a buffer size of 
1K (1024 bytes).  See 
http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-321.pdf, 
Section 7.3, page 4.

A 1 MB chunk is even quite large for many current compression algorithms 
distributed with common operating systems.  The man page for gzip would seem 
to list the window size used by that algorithm as 32 KB.  The block size used 
by bzip2 can vary between 100,000 and 900,000 bytes (corresponding to 
compression settings -1 to -9).  The recent LZMA .xz format compressor would 
appear to vary maximum memory usage between 6 MB and 800 MB, corresponding to 
the -0 to -9 compression presets.  (The LZMA memory requirements are adjusted 
downwards to accommodate a percentage of available RAM where necessary.)
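The effect of the window size is easy to demonstrate with zlib, which uses the 32 KB window mentioned above: repeats of a chunk that fits inside the window compress well, while repeats of a chunk larger than the window look random to the compressor:

```python
import os
import zlib

small_chunk = os.urandom(1024)        # repeats sit inside the 32 KB window
large_chunk = os.urandom(64 * 1024)   # each repeat is beyond the window

small_stream = small_chunk * 64       # 64 KB of repeated 1 KB chunks
large_stream = large_chunk * 4        # 256 KB of repeated 64 KB chunks

def ratio(data: bytes) -> float:
    return len(zlib.compress(data, 9)) / len(data)

print(ratio(small_stream) < 0.2)      # True: matches found within the window
print(ratio(large_stream) > 0.95)     # True: effectively incompressible
```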

Cheers,

Paul.


[Bacula-users] bacula support in the atlanta area?

2010-07-28 Thread Paul Chason
Does anyone know of any Bacula consultants in or around Atlanta, GA? Thanks
in advance for your help!

 

Thanks,

 

Paul Chason
Managing Partner
office 678 580 1690 x202
cell 404 432 2433
fax 770 360 5776
Mediacurrent - Interactive Solutions
http://www.mediacurrent.com/



[Bacula-users] Bacula 5.0.2 FreeBSD port fails to build during upgrade

2010-07-20 Thread Paul Mather
I'm running FreeBSD 8.1-PRERELEASE (RELENG_8).  Recently, the 
sysutils/bacula-{client,server} ports were updated to 5.0.2.  Unfortunately, 
when updating via portmaster, the bacula-client port updated successfully, but 
bacula-server did not.  It fails to build:

[[...]]
Compiling ua_restore.c
Compiling ua_run.c
Compiling ua_select.c
Compiling ua_server.c
Compiling ua_status.c
Compiling ua_tree.c
Compiling ua_update.c
Compiling vbackup.c
Compiling verify.c
Linking bacula-dir ...
/usr/ports/sysutils/bacula-server/work/bacula-5.0.2/libtool --silent --tag=CXX 
--mode=link /usr/bin/c++  -L/usr/local/lib -L../lib -L../cats -L../findlib -o 
bacula-dir dird.o admin.o authenticate.o autoprune.o backup.o bsr.o catreq.o 
dir_plugins.o dird_conf.o expand.o fd_cmds.o getmsg.o inc_conf.o job.o jobq.o 
migrate.o mountreq.o msgchan.o next_vol.o newvol.o pythondir.o recycle.o 
restore.o run_conf.o scheduler.o ua_acl.o ua_cmds.o ua_dotcmds.o ua_query.o 
ua_input.o ua_label.o ua_output.o ua_prune.o ua_purge.o ua_restore.o ua_run.o 
ua_select.o ua_server.o ua_status.o ua_tree.o ua_update.o vbackup.o verify.o  
-lbacfind -lbacsql -lbacpy -lbaccfg -lbac -lm   -L/usr/local/lib -lpq -lcrypt 
-lpthread  -lintl  -lwrap /usr/local/lib/libintl.so /usr/local/lib/libiconv.so 
-Wl,-rpath -Wl,/usr/local/lib -lssl -lcrypto
/usr/local/lib/libbacsql.so: undefined reference to 
`rwl_writelock(s_rwlock_tag*)'
*** Error code 1

Stop in /usr/ports/sysutils/bacula-server/work/bacula-5.0.2/src/dird.


  == Error in /usr/ports/sysutils/bacula-server/work/bacula-5.0.2/src/dird 
==


*** Error code 1

Stop in /usr/ports/sysutils/bacula-server/work/bacula-5.0.2.
*** Error code 1

Stop in /usr/ports/sysutils/bacula-server.
*** Error code 1

Stop in /usr/ports/sysutils/bacula-server.


It looks to me that the linking step above is wrong: it is picking up the old 
version of the library installed in /usr/local/lib by sysutils/bacula-server 
5.0.0_1.  It shouldn't be including -L/usr/local/lib in the invocation of 
libtool.

Anyone who builds the port from scratch will not have a problem, but anyone 
updating via portmaster or portupgrade will run into the problems above.

Cheers,

Paul.




Re: [Bacula-users] Bacula 5.0.2 FreeBSD port fails to build during upgrade

2010-07-20 Thread Paul Mather
On Jul 20, 2010, at 3:10 PM, Dan Langille wrote:

 On 7/20/2010 12:20 PM, Paul Mather wrote:
 I'm running FreeBSD 8.1-PRERELEASE (RELENG_8).  Recently, the 
 sysutils/bacula-{client,server} ports were updated to 5.0.2.  Unfortunately, 
 when updating via portmaster, the bacula-client port updated successfully, 
 but bacula-server did not.  It fails to build:
 
 [[...]]
 Compiling ua_restore.c
 Compiling ua_run.c
 Compiling ua_select.c
 Compiling ua_server.c
 Compiling ua_status.c
 Compiling ua_tree.c
 Compiling ua_update.c
 Compiling vbackup.c
 Compiling verify.c
 Linking bacula-dir ...
 /usr/ports/sysutils/bacula-server/work/bacula-5.0.2/libtool --silent 
 --tag=CXX --mode=link /usr/bin/c++  -L/usr/local/lib -L../lib -L../cats 
 -L../findlib -o bacula-dir dird.o admin.o authenticate.o autoprune.o 
 backup.o bsr.o catreq.o dir_plugins.o dird_conf.o expand.o fd_cmds.o 
 getmsg.o inc_conf.o job.o jobq.o migrate.o mountreq.o msgchan.o next_vol.o 
 newvol.o pythondir.o recycle.o restore.o run_conf.o scheduler.o ua_acl.o 
 ua_cmds.o ua_dotcmds.o ua_query.o ua_input.o ua_label.o ua_output.o 
 ua_prune.o ua_purge.o ua_restore.o ua_run.o ua_select.o ua_server.o 
 ua_status.o ua_tree.o ua_update.o vbackup.o verify.o  -lbacfind -lbacsql 
 -lbacpy -lbaccfg -lbac -lm   -L/usr/local/lib -lpq -lcrypt -lpthread  -lintl 
  -lwrap /usr/local/lib/libintl.so /usr/local/lib/libiconv.so -Wl,-rpath 
 -Wl,/usr/local/lib -lssl -lcrypto
 /usr/local/lib/libbacsql.so: undefined reference to 
 `rwl_writelock(s_rwlock_tag*)'
 *** Error code 1
 
 Stop in /usr/ports/sysutils/bacula-server/work/bacula-5.0.2/src/dird.
 
 
   == Error in 
 /usr/ports/sysutils/bacula-server/work/bacula-5.0.2/src/dird ==
 
 
 *** Error code 1
 
 Stop in /usr/ports/sysutils/bacula-server/work/bacula-5.0.2.
 *** Error code 1
 
 Stop in /usr/ports/sysutils/bacula-server.
 *** Error code 1
 
 Stop in /usr/ports/sysutils/bacula-server.
 
 
 It looks to me that the linking step above is wrong: it is picking up the 
 old version of the library installed in /usr/local/lib by 
 sysutils/bacula-server 5.0.0_1.  It shouldn't be including 
 -L/usr/local/lib in the invocation of libtool.
 
 Anyone who builds the port from scratch will not have a problem, but anyone 
 updating via portmaster or portupgrade will run into the problems above.
 
 Agreed.  I heard about this yesterday, but have not had time to fix it.
 
 We're also going to change the port to default to PostgreSQL instead of 
 SQLite.
 
 Sorry you encountered the problem.

No problems, as the workaround was simple and I wanted to give folks a 
heads-up.  (I guess I should have been more explicit, but the workaround is 
simply to pkg_delete the bacula-server port and reinstall it, rather than 
trying to upgrade via portmaster/portupgrade.  Deleting the port won't remove 
any local configuration files, for those who might be worried.)
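For anyone hitting the same thing, the workaround above boils down to a few commands; here it is as a dry-run sketch (the package name is taken from this thread; replace the `run` wrapper with direct execution on the affected server):

```shell
# Dry-run sketch of the pkg_delete-and-reinstall workaround.
# 'run' only echoes; drop the wrapper to actually execute.
run() { echo "would run: $*"; }

run pkg_delete bacula-server-5.0.0_1       # remove the stale 5.0.0_1 package
run cd /usr/ports/sysutils/bacula-server   # go to the updated port
run make install clean                     # rebuild and install 5.0.2
```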

Good to hear that PostgreSQL will become the default back-end database.  Nice 
work!

Cheers,

Paul.





[Bacula-users] Bacula error: ERR=Interrupted system call

2010-07-01 Thread Paul Binkley
Hi,

 

Running Bacula 3.0.2 on CentOS 5.4. Getting an error I haven't encountered
before backing up a Windows 7 client.

 

From the director:

bconsole status client

.

Connecting to Client xx-fd at 10.xx.xx.xx:9102

 

xx-fd Version: 3.0.3 (18 October 2009)  VSS Linux Cross-compile Win32

Daemon started 28-Jun-10 07:56, 0 Jobs run since started.

Heap: heap=0 smbytes=10,874 max_bytes=10,996 bufs=53 max_bufs=54

Sizeof: boffset_t=8 size_t=4 debug=0 trace=1

 

Running Jobs:

Director connected at: 01-Jul-10 08:45

No Jobs running.



 

Terminated Jobs:

JobId  LevelFiles  Bytes   Status   FinishedName 

==

  2112  Incr 13,2206.543 G  OK   16-Jun-10 10:11 MISWeeklyBackup



*

 

The terminated job was run manually and was fine, but the scheduled job
fails with the following:

01-Jul 02:31 bacula-dir JobId 2214: Start Backup JobId 2214,
Job=MISWeeklyBackup.2010-07-01_01.05.00_09

01-Jul 02:31 bacula-dir JobId 2214: Using Device FileStorage

01-Jul 02:34 bacula-dir JobId 2214: Fatal error: No Job status returned from
FD.

01-Jul 02:34 bacula-dir JobId 2214: Fatal error: bsock.c:135 Unable to
connect to Client: xx-fd on 10.xx.xx.xx:9102. ERR=Interrupted system
call 01-Jul 02:34 bacula-dir JobId 2214: Error: Bacula bacula-dir 3.0.2
(18Jul09): 01-Jul-2010 02:34:22

  Build OS:   x86_64-redhat-linux-gnu redhat 

  JobId:  2214

  Job:MISWeeklyBackup.2010-07-01_01.05.00_09

  Backup Level:   Incremental, since=2010-06-16 09:51:41

  Client:  xx-fd  3.0.3 (18Oct09)
Linux,Cross-compile,Win32

  FileSet:MIS FileSet 2010-03-10 01:05:00

  Pool:   Weekly (From Job resource)

  Catalog:MyCatalog (From Client resource)

  Storage:FileStorage (From Job resource)

  Scheduled time: 01-Jul-2010 01:05:00

  Start time: 01-Jul-2010 02:31:12

  End time:   01-Jul-2010 02:34:22

  Elapsed time:   3 mins 10 secs

  Priority:   10

  FD Files Written:   0

  SD Files Written:   0

  FD Bytes Written:   0 (0 B)

  SD Bytes Written:   0 (0 B)

  Rate:   0.0 KB/s

  Software Compression:   None

  VSS:no

  Encryption: no

  Accurate:   no

  Volume name(s): 

  Volume Session Id:  241

  Volume Session Time:1275585098

  Last Volume Bytes:  7,475,074,671 (7.475 GB)

  Non-fatal FD errors:0

  SD Errors:  0

  FD termination status:  Error

  SD termination status:  Waiting on FD

  Termination:*** Backup Error ***

 



Re: [Bacula-users] Bconsole not properly installed

2010-06-30 Thread Paul Mather
On Jun 30, 2010, at 9:13 AM, Albin Vega wrote:

 Hello
  
 First let me say that I haven't used FreeBSD and Bacula before, so it's all a 
 bit new to me, and I might make some beginner's mistakes.
  
 I have installed Bacula server 5.0.0.1 on a FreeBSD 8 platform. I have 
 followed the instructions on http://www.freebsddiary.org/bacula.php and have 
 run all the scripts, and the Bacula pids are running. But now I have run into 
 some trouble. When I try to start bconsole in the terminal window I get the 
 message "Command not found".

Be careful about following that guide: it is somewhat out of date.  (For 
example, the sysutils/bacula port no longer exists, as I'm sure you discovered.)

 I then opened Gnome and ran the bconsole command in a terminal window. I got 
 the message "Bconsole not properly installed".
  
 I have located the bconsole file in two places:
  
 1.
 /usr/local/share/bacula/bconsole
  
 This script looks like this:
 bacupserver# more /usr/local/share/bacula/bconsole
 #!/bin/sh
 which dirname > /dev/null
 # does dirname exist?
 if [ $? = 0 ] ; then
   cwd=`dirname $0`
   if [ x$cwd = x. ]; then
  cwd=`pwd`
   fi
   if [ x$cwd = x/usr/local/sbin ] ; then
  echo bconsole not properly installed.
  exit 1
   fi
 fi
 if [ x/usr/local/sbin = x/usr/local/etc ]; then
echo bconsole not properly installed.
exit 1
 fi
 if [ $# = 1 ] ; then
echo doing bconsole $1.conf
/usr/local/sbin/bconsole -c $1.conf
 else
/usr/local/sbin/bconsole -c /usr/local/etc/bconsole.conf
 fi
 Running this script returns message:
 bconsole not properly installed.
  
 2. 
 /usr/ports/sysutils/bacula-server/work/bacula-5.0.0/scripts/bconsole
  
 This script looks like this:
 bacupserver# more 
 /usr/ports/sysutils/bacula-server/work/bacula-5.0.0/scripts/bconsole
 #!/bin/sh
 which dirname > /dev/null
 # does dirname exist?
 if [ $? = 0 ] ; then
   cwd=`dirname $0`
   if [ x$cwd = x. ]; then
  cwd=`pwd`
   fi
   if [ x$cwd = x/sbin ] ; then
  echo bconsole not properly installed.
  exit 1
   fi
 fi
 if [ x/sbin = x/etc/bacula ]; then
echo bconsole not properly installed.
exit 1
 fi
 if [ $# = 1 ] ; then
echo doing bconsole $1.conf
/sbin/bconsole -c $1.conf
 else
/sbin/bconsole -c /etc/bacula/bconsole.conf
 fi
  
 Running this script returns the message:
 /usr/ports/sysutils/bacula-server/work/bacula-5.0.0/scripts/bconsole: 
 /sbin/bconsole: not found
  
  
 I then located the bconsole.conf file:
 /usr/ports/sysutils/bacula-server/work/bacula-5.0.0/src/console/bconsole.conf
 I tried to manually move it to /etc/bacula but there is no /etc/bacula 
 directory...
  
 I am running out of ideas on what to do here. Does anybody have any ideas on 
 what to do? I would be very grateful if someone would point me in the right 
 direction.

Did you install the sysutils/bacula-server port?  (I.e., did you do a make 
install in the /usr/ports/sysutils/bacula-server directory to build and 
install it?)  If you did, it should have installed bconsole for you under 
/usr/local.  In my case, it shows it to be installed under /usr/local/sbin:

backup# which bconsole
/usr/local/sbin/bconsole
backup# file /usr/local/sbin/bconsole
/usr/local/sbin/bconsole: ELF 32-bit LSB executable, Intel 80386, version 1 
(FreeBSD), dynamically linked (uses shared libs), for FreeBSD 8.0 (800505), not 
stripped

You shouldn't be trying to run the bconsole shell script; run 
/usr/local/sbin/bconsole directly.

Under FreeBSD, ports avoid putting configuration directories in the base 
operating system directories, to avoid polluting the base system.  The ports 
system tries to keep them separate, usually under /usr/local.  So, Bacula 
installed from ports on FreeBSD will not use /etc/bacula, it uses 
/usr/local/etc.  The bconsole installed knows to look in /usr/local/etc for 
configuration files.

Note, when you installed the Bacula server port, there will probably have been 
some messages about adding entries to /etc/rc.conf to ensure the Bacula daemons 
are run at system startup.  Also, there will have been information indicating 
that the configuration files for the daemons will have been installed in 
/usr/local/etc and will need to be edited to suit your local setup.
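For reference, those rc.conf entries typically look like the following (variable names assumed from the port's rc.d scripts; check the message printed at install time for your version):

```
# /etc/rc.conf additions to start the Bacula daemons at boot
bacula_dir_enable="YES"
bacula_sd_enable="YES"
bacula_fd_enable="YES"   # if the client port is installed on the same host
```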

If you want to know what files were installed by the Bacula server port, you 
can use the following command: pkg_info -L bacula-server-5.0.0_1

Finally, Dan Langille is listed as the current maintainer of the 
sysutils/bacula-server port.  I believe Dan is subscribed to this list and 
sometimes posts.

Cheers,

Paul.



Re: [Bacula-users] Problem with connection in Bacula-Bat

2010-06-21 Thread Paul Mather
On Jun 21, 2010, at 6:43 AM, Cato Myhrhagen wrote:

 I have some problems getting Bacula-Bat working. Here is what I have done so 
 far:
  
 1. Installed FreeBSD 8.0 rel
 2. From the ports catalogue I have installed Gnome Lite
 3. Upgraded all the ports with CVSup
 4. Then I installed Bacula 5.0.0.1 rel (with MySQL), also from the ports 
 catalogue.

By this, do you mean the sysutils/bacula-server port?

 5. Installed BAT 5.0.0.1
  
 I then tried to start BAT by opening Gnome, starting a terminal window and 
 typing bat, but got an error message (I don't have it now because it stopped 
 giving me this message). I also noticed that the bacula-dir process stopped 
 when I tried to start BAT. Then I checked if MySQL was running, but it wasn't 
 even installed (I thought it would be installed together with Bacula, but no).

Only the MySQL client libraries are installed when you install Bacula.  The 
reason behind this is that you might be using a MySQL database on a different 
server to store your database.  If you are going to run a local MySQL server 
then you need to install an appropriate databases/mysql??-server port and 
configure that so it runs.

 Therefore I installed MySQL-server 5.0.90, started it and tried to run BAT 
 again. The program starts but doesn't seem to work; it continues trying to 
 connect to the database, I think. The lower left corner of the BAT GUI 
 continues to display the following: "Connecting to Director Localhost:9010", 
 and then it says "Connection fails".

After you installed the MySQL server did you run the scripts to create the 
databases and tables necessary for Bacula to run?  These are in 
/usr/local/share/bacula.  You need to run three scripts: 
create_bacula_database, make_bacula_tables, and grant_bacula_privileges before 
starting up Bacula for the first time.  The reason you are getting the error 
message about not being able to connect to the Director is that it probably 
died when it started up after not finding the correct tables in the database.
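The three setup scripts mentioned above can be run in order like this (paths are the defaults from the FreeBSD port; the `echo` keeps this a dry run so nothing executes here):

```shell
# Run the Bacula catalogue setup scripts in the required order.
# Drop the 'echo' to actually execute them on the server.
for s in create_bacula_database make_bacula_tables grant_bacula_privileges; do
  echo "sh /usr/local/share/bacula/$s"
done
```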

Note, also, you need to enable MySQL in /etc/rc.conf if you are using it as 
your back-end database, so that it runs at startup.  If it is not running, the 
Director will not be able to connect to the database.
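For the MySQL case, the rc.conf entry is a one-liner (variable name from the standard mysql-server port):

```
# /etc/rc.conf -- start the local MySQL server at boot
mysql_enable="YES"
```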

Note, I use PostgreSQL as my back-end database.  I can easily check it is 
running via its startup script:

backup# /usr/local/etc/rc.d/postgresql status
pg_ctl: server is running (PID: 1180)
/usr/local/bin/postgres -D /usr/local/pgsql/data

(There should be something similar for MySQL.)  Similarly, for the Bacula 
Director:

backup# /usr/local/etc/rc.d/bacula-dir status
bacula_dir is running as pid 1280.

Cheers,

Paul.


--
ThinkGeek and WIRED's GeekDad team up for the Ultimate 
GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the 
lucky parental unit.  See the prize list and enter to win: 
http://p.sf.net/sfu/thinkgeek-promo
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Problem installing Bacula-BAT

2010-06-17 Thread Paul Mather
On Jun 17, 2010, at 7:24 AM, Cato Myhrhagen wrote:

 Hello
  
 First let me say that I am new to FreeBSD and Bacula, so my question might be 
 a bit trivial. Nevertheless, I am having big problems installing Bacula BAT 
 on my FreeBSD server. Let me explain what I have done so far:
  
 1. Installed FreeBSD 8.0 rel
 2. From the ports catalogue I have installed Xorg
 3. Then I installed Bacula 3.0.2 rel (with MySQL), also from the ports 
 catalogue.
  
 I then tried to install Bacula-bat, but here I ran into trouble. First the 
 installation starts as it should and then goes on for quite a while. Then 
 suddenly it stops with the following message:
  
 install: 
 /usr/ports/sysutils/bacula-bat/work/bacula-3.0.2/src/qt-console/.libs/bat: No 
 such file or directory
 *** Error code 71
 Stop in /usr/ports/sysutils/bacula-bat.
 *** Error code 1
 Stop in /usr/ports/sysutils/bacula-bat.

Is your ports tree up to date?  The current version of Bacula used in the 
sysutils/bacula-bat port is 5.0.0.  The fact that it is using 3.0.2 for you 
suggests to me that your ports tree is at least partially outdated.

You can use "portsnap fetch update" to update your ports tree (assuming you are 
using portsnap, which comes with FreeBSD 8.0-RELEASE, to keep the ports tree 
up to date).

Cheers,

Paul.




[Bacula-users] bscan-recovered catalogue not usable :-(

2010-06-02 Thread Paul Mather
About three months ago I began using Bacula (5.0.0 on FreeBSD 8-STABLE), 
initially backing up to disk, but with a view to backing up to tape in the near 
future.  To keep things simple, I initially used Sqlite for the catalogue.  
About a week ago, I decided to move to using PostgreSQL for the catalogue, as 
the production system I intend to deploy will be backing up many more files 
than the test deployment, and so I wanted to get some experience of Bacula + 
PostgreSQL.

That's when it all went to hell in a hand basket. :-)

I rebuilt Bacula with PostgreSQL as the catalogue back-end.  I could not get 
the sqlite2pgsql script to migrate my Sqlite catalogue successfully.  I then 
tried to recover the catalogue into PostgreSQL from the volumes via bscan.  
This was more successful, but I am still not left with working backups.

When I run a backup job, I get an e-mail intervention notice informing me 
"Cannot find any appendable volumes."  When I listed the volumes of the job in 
question, bscan had left the one and only volume in the "Archive" state.  So, I 
changed this to "Append" using "update volume" in bconsole.  Unfortunately, 
this status quickly reverted to "Error" when the job tried to run.  I believe 
this is due to a mismatch between the volbytes size of the volume in the 
catalogue vs. the size of the volume on the file system:

*list media pool=File
+---------+------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| mediaid | volumename | volstatus | enabled | volbytes      | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten         |
+---------+------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|       1 | TestVolume | Error     |       1 | 1,263,847,514 |        0 |   31,536,000 |       1 |    0 |         0 | File      | 2010-05-26 23:15:39 |
+---------+------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------------------+

vs.

backup# ls -al /backups/bacula/TestVolume 
-rw-r-  1 bacula  bacula  1264553900 May 26 23:15 /backups/bacula/TestVolume

I would like to continue to use this volume for backups, as it is well under 
the 5 GB maximum volume bytes set in the pool definition.  Is this an 
unrealistic expectation for a bscan-recovered catalogue, or is there some 
simple way to get this volume recognised as appendable again?  (Is the 
expectation after bscan to start with a new volume?)
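(For context, the bconsole steps described above look roughly like this; a sketch, using the volume and pool names from the listing:)

```
*update volume=TestVolume volstatus=Append
*list media pool=File
```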

Is Bacula expected to act gracefully in the face of the loss/corruption of the 
catalogue?

Cheers,

Paul.





[Bacula-users] Sqlite3 to PostgreSQL 8.4 catalogue migration

2010-05-28 Thread Paul Mather
Does anyone have a working script to migrate a Sqlite3 catalogue database to 
PostgreSQL 8.4.4?

I'm using a very recent FreeBSD 8-STABLE and the sqlite2pgsql script in the 
examples/database directory of the source code doesn't work for me.  Has anyone 
got this to work successfully under a current Bacula installation on FreeBSD?

I'm using these versions of the ports:

bacula-server-5.0.0
postgresql-client-8.4.4
postgresql-server-8.4.4
sqlite3-3.6.23.1_1

My current Bacula server is backing up to disk and, being a relatively recent 
install, none of the volumes have been recycled.  So, as a way of getting a 
working PostgreSQL catalogue in lieu of a non-working sqlite2pgsql script, I 
thought I would run bscan -s to recover catalogue information from the volumes 
(having first run create_postgresql_database, make_postgresql_tables, and 
grant_postgresql_privileges).  Will this recreate the catalogue entirely, or 
will I be missing something other than log data?

Cheers,

Paul.


[Bacula-users] Quantum SuperLoader 3 under Bacula on FreeBSD 8

2010-05-18 Thread Paul Mather
I am currently assembling a quote for an LTO-4 tape backup system.  So far, I 
am looking at using a 16-slot Quantum SuperLoader 3 with LTO-4HH drive as the 
tape unit.  Married to this will be a server to act as the backup server that 
will drive the tape unit using Bacula to manage backups.  The server will be a 
quad core X3440 system with 4 GB of RAM and four 1 TB SATA 7200 rpm hard drives 
in a case that has room for eight hot-swap drives.  I plan on using FreeBSD 8 
on the system, using ZFS to raidz the drives together to provide spool space 
for Bacula.  I will be using an Areca ARC-1300-4X PCIe SAS card to interface 
with the tape drive.

My main question is this: is the Quantum SuperLoader 3 LTO-4 tape drive 
supported by Bacula 5 on FreeBSD?  In particular, is the autoloader fully 
supported?  The Bacula documentation indicates the SuperLoader works fully 
under Bacula, though not explicitly whether under FreeBSD.

The backup server will serve a GigE network cluster of perhaps a dozen machines 
with over 6 TB of storage, most of which is on the cluster's NFS server.  Does 
anyone have good advice on sizing the spool/holding/disk pool for a Bacula 
server?  Is it imperative to have enough disk space to hold a full backup 
(i.e., 6 TB in this case), or is it sufficient to have enough space to maintain 
streaming to tape?  (I don't have much experience of Bacula, having used it 
only to back up to disk.)  In other words, do I need more 1 TB drives in my 
backup server?

Finally, is 4 GB of RAM sufficient for good performance with ZFS?  Will ZFS on 
FreeBSD be able to maintain full streaming speeds to tape, given the various 
reports of I/O stalls under ZFS reported recently?

Thanks in advance for any advice or information.

Cheers,

Paul.


Re: [Bacula-users] Quantum SuperLoader 3 under Bacula on FreeBSD 8

2010-05-18 Thread Paul Mather
On May 18, 2010, at 9:35 AM, Robert Hartzell wrote:

 On Mon, 2010-05-17 at 15:08 -0400, Paul Mather wrote:
 I am currently assembling a quote for an LTO-4 tape backup system.  So far, 
 I am looking at using a 16-slot Quantum SuperLoader 3 with LTO-4HH drive as 
 the tape unit.  Married to this will be a server to act as the backup server 
 that will drive the tape unit using Bacula to manage backups.  The server 
 will be a quad core X3440 system with 4 GB of RAM and four 1 TB SATA 7200 
 rpm hard drives in a case that has room for eight hot-swap drives.  I plan 
 on using FreeBSD 8 on the system, using ZFS to raidz the drives together to 
 provide spool space for Bacula.  I will be using an Areca ARC-1300-4X PCIe 
 SAS card to interface with the tape drive.
 
 My main question is this: is the Quantum SuperLoader 3 LTO-4 tape drive 
 supported by Bacula 5 on FreeBSD?  In particular, is the autoloader fully 
 supported?  The Bacula documentation indicates the SuperLoader works fully 
 under Bacula, though not explicitly whether under FreeBSD.
 
 The backup server will serve a GigE network cluster of perhaps a dozen 
 machines with over 6 TB of storage, most of which is on the cluster's NFS 
 server.  Does anyone have good advice on sizing the spool/holding/disk pool 
 for a Bacula server?  Is it imperative to have enough disk space to hold a 
 full backup (i.e., 6 TB in this case), or is it sufficient to have enough 
 space to maintain streaming to tape?  (I don't have much experience of 
 Bacula, having used it only to back up to disk.)  In other words, do I need 
 more 1 TB drives in my backup server?
 
 Finally, is 4 GB of RAM sufficient for good performance with ZFS?  Will ZFS 
 on FreeBSD be able to maintain full streaming speeds to tape, given the 
 various reports of I/O stalls under ZFS reported recently?
 
 ZFS loves ram. More ram = better performance. I'm not at all familiar
 with zfs performance on feebsd but zfs version 13 that's used on freebsd
 8 is pretty old. ZFS is currently at version 22.

That must be on OpenSolaris.  My current Solaris 10 system reports using pool 
version 15.  FreeBSD 8-STABLE is now up to pool version 14, and I believe 
porting is currently underway to jump it up more pool levels.

 I/O stalls? Is that a freebsd issue?

Some folks have reported short, bursty I/O stalls during very intense write 
workloads.  I've seen it reported on FreeBSD, but also in those threads there 
has been mention of it happening on, e.g., OpenSolaris, too.  The general 
workaround advice currently on FreeBSD is to lower the vfs.zfs.txg.timeout 
kernel tuneable from its default of 30 seconds to something lower.  IIRC, this 
problem may also only affect systems with large ARC sizes.
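For anyone wanting to try that workaround, the tunable is set at boot time; a sketch (the value 5 is illustrative, not a recommendation):

```
# /boot/loader.conf -- lower the ZFS transaction group timeout
# (default 30 seconds on FreeBSD 8 at the time of writing)
vfs.zfs.txg.timeout="5"
```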

Thanks for the RAM advice; I will try and bump it up.

Cheers,

Paul.




[Bacula-users] Problems with bacula and list problem

2010-05-13 Thread Paul Bradley
Hi,

I seem to be having problems replying to my thread - I never see the reply
echoed back to me via the list. Here is my reply and original thread
message:


I can resolve/ping 'server' no problem.

iptables -L -n returns:

Chain INPUT (policy ACCEPT)
target prot opt source   destination
Chain FORWARD (policy ACCEPT)
target prot opt source   destination
Chain OUTPUT (policy ACCEPT)
target prot opt source   destination
I just installed bacula on Ubuntu Gutsy (built bacula from source) and am
having a strange problem. In bconsole doing 'status client' (during the
initial test run as described at
http://www.bacula.org/5.0.x-manuals/en/main/main/Brief_Tutorial.html)
returns an error 'failed to connect to server-fd'

This is a test run and the FD, Director and SD are all on the same machine.

hosts.allow and hosts.deny are empty apart from comments.

I can telnet locally to port 9102.

'netstat -nlpe | grep 9102' returns:
tcp0  0 0.0.0.0:91020.0.0.0:*   LISTEN

bacula-fd.conf contains a correct entry in its list of permitted directors,
and the password listed matches the password in the client entry in
bacula-dir.conf. Here are the entries in those files, with the obvious
editing:

IN BACULA-FD.CONF

Director {
  Name = server-dir
  Password = SOME PASSWORD (MATCHES OTHER ENTRY BELOW)
}

IN BACULA-DIR.CONF
Client {
  Name = server-fd
  Address = server
  FDPort = 9102
  Catalog = MyCatalog
  Password = SOME PASSWORD (MATCHES OTHER ENTRY ABOVE)  #
password$
  File Retention = 30 days# 30 days
  Job Retention = 6 months# six months
  AutoPrune = yes # Prune expired Jobs/Files
}


How can I further debug this?
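For what it's worth, a few generic first checks for this kind of FD connection failure, written as a dry run (the hostname 'server' and the config path are placeholders from the post; adjust to your build-from-source install paths):

```shell
# Dry-run checklist for FD connectivity problems.
# 'run' only echoes; drop the wrapper to actually execute on the director host.
run() { echo "check: $*"; }

run ping -c 1 server                             # name resolution / reachability
run telnet server 9102                           # is the FD port reachable remotely?
run bacula-fd -t -c /etc/bacula/bacula-fd.conf   # syntax-check the FD config
run bacula-fd -d 100                             # restart the FD with debug output
```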


Re: [Bacula-users] Problem connecting to FD

2010-05-08 Thread Paul Bradley



 I can resolve/ping 'server' no problem.

 iptables -L -n returns:


 Chain INPUT (policy ACCEPT)
 target prot opt source   destination
 Chain FORWARD (policy ACCEPT)
 target prot opt source   destination
 Chain OUTPUT (policy ACCEPT)
 target prot opt source   destination

 Paul


 On Fri, May 7, 2010 at 6:40 AM, Thomas Mueller tho...@chaschperli.chwrote:

 Am Thu, 06 May 2010 19:57:59 +0100 schrieb Paul Bradley:

  IN BACULA-FD.CONF
 
 
  Director {
Name = server-dir
Password = SOME PASSWORD (MATCHES OTHER ENTRY BELOW)
  }
 
  IN BACULA-DIR.CONF
 
  Client {
Name = server-fd
Address = server
FDPort = 9102
Catalog = MyCatalog
Password = SOME PASSWORD (MATCHES OTHER ENTRY ABOVE)  #
  password$
File Retention = 30 days# 30 days Job Retention = 6 months
   # six months AutoPrune = yes # Prune
expired Jobs/Files
  }
 
 
  How can I further debug this?
 
  thanks

 you provided "server" as the address. Can you successfully look up the name 
 (with "ping server" and/or "host server")?

 is a firewall on (check: iptables -L -n)?

 - Thomas








Re: [Bacula-users] Problem connecting to FD

2010-05-07 Thread Paul Bradley
Hi Thomas,

I can resolve/ping 'server' no problem.

iptables -L -n returns:


Chain INPUT (policy ACCEPT)
target prot opt source   destination
Chain FORWARD (policy ACCEPT)
target prot opt source   destination
Chain OUTPUT (policy ACCEPT)
target prot opt source   destination

Paul


On Fri, May 7, 2010 at 6:40 AM, Thomas Mueller tho...@chaschperli.chwrote:

 Am Thu, 06 May 2010 19:57:59 +0100 schrieb Paul Bradley:

  IN BACULA-FD.CONF
 
 
  Director {
Name = server-dir
Password = SOME PASSWORD (MATCHES OTHER ENTRY BELOW)
  }
 
  IN BACULA-DIR.CONF
 
  Client {
Name = server-fd
Address = server
FDPort = 9102
Catalog = MyCatalog
Password = SOME PASSWORD (MATCHES OTHER ENTRY ABOVE)  #
  password$
File Retention = 30 days# 30 days Job Retention = 6 months
   # six months AutoPrune = yes # Prune
expired Jobs/Files
  }
 
 
  How can I further debug this?
 
  thanks

 you provided "server" as the address. Can you successfully look up the name 
 (with "ping server" and/or "host server")?

 is a firewall on (check: iptables -L -n)?

 - Thomas





[Bacula-users] Problem connecting to FD

2010-05-06 Thread Paul Bradley


 I just installed bacula on Ubuntu Gutsy (built bacula from source) and am
 having a strange problem. In bconsole doing 'status client' (during the
 initial test run as described at
 http://www.bacula.org/5.0.x-manuals/en/main/main/Brief_Tutorial.html)
 returns an error 'failed to connect to server-fd'

 This is a test run and the FD, Director and SD are all on the same machine.


 hosts.allow and hosts.deny are empty apart from comments.

 I can telnet locally to port 9102.

 'netstat -nlpe | grep 9102' returns:
 tcp0  0 0.0.0.0:91020.0.0.0:*
 LISTEN

 bacula-fd.conf contains a correct entry in its list of permitted directors,
 and the password listed matches the password in the client entry in
 bacula-dir.conf. Here are the entries in those files, with the obvious
 editing:

 IN BACULA-FD.CONF


 Director {
   Name = server-dir
   Password = SOME PASSWORD (MATCHES OTHER ENTRY BELOW)
 }

 IN BACULA-DIR.CONF

 Client {
   Name = server-fd
   Address = server
   FDPort = 9102
   Catalog = MyCatalog
   Password = SOME PASSWORD (MATCHES OTHER ENTRY ABOVE)  #
 password$
   File Retention = 30 days# 30 days
   Job Retention = 6 months# six months
   AutoPrune = yes # Prune expired Jobs/Files
 }


 How can I further debug this?

 thanks




Re: [Bacula-users] Feature idea feeler - bconsole include / grep

2010-05-06 Thread Paul Binkley
Would totally love to be able to use grep within bconsole...was wishing for
that earlier today. 

-Original Message-
From: Steve Polyack [mailto:kor...@comcast.net] 
Sent: Thursday, May 06, 2010 1:17 PM
To: bacula-de...@lists.sourceforge.net
Cc: bacula-users
Subject: [Bacula-users] Feature idea feeler - bconsole include / grep

This is just a feeler to see if there would be any interest in the following
feature:

Now that we have tab completion within bconsole, an additional option has
come to mind which you can see in various other CLI environments (IOS, *NIX
shells): the ability to pipe the output of commands through various
filters.

In the *NIX shell environment you can simply pipe the output of any program
to 'grep' to filter the output.  Cisco IOS allows you to do the same thing,
along with the 'begin' option, which begins output after encountering the
following token.

I believe this would be a nice feature to have for bconsole.  For example,
if I want to see if particular files/directories were backed up for a job I
could simply perform a '*list files jobid=999 | include
filenameImLookingFor'.  include could be interchangeable with grep.  
I realize that I can accomplish the same thing with SQL queries or submit
bconsole commands via a shell and filtering the output, but this still seems
like it would streamline bconsole usage.
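Until such a feature exists, the shell-side equivalent mentioned above can be scripted. A minimal sketch (a stub stands in for bconsole so the pipeline can run anywhere; on a real system you would use `echo "list files jobid=999" | bconsole -c /usr/local/etc/bconsole.conf | grep NAME`):

```shell
# Stub standing in for bconsole output (hypothetical file list).
fake_bconsole() { printf 'a/file1.txt\netc/hosts\nb/file2.log\n'; }

# Feed a command in non-interactively and filter the output with grep.
echo "list files jobid=999" | fake_bconsole | grep hosts   # prints: etc/hosts
```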

Thoughts?
-Steve Polyack




[Bacula-users] switching between RAID-packs, marking Volume in Error

2010-03-29 Thread Paul Binkley
Hi Folks,

I have a backup scheme that has been working well, with 1 issue. We write to
a 4.5 TB RAID pack for 30 days, then cycle it as offsite backup, replacing
it with an identical array.

Now obviously each RAID pack has a different set of volumes, so when I
switch them, the first job can't seem to figure out which volume to use and
it marks the one it expects as Error in the catalog. I have to force it to
use a Volume on the new RAID and then it seems to work fine until I rotate
it again. I am concerned that this one Volume is getting marked as
erroneous, and would also like this to be as hands-off as possible. Has
anyone done this sort of thing before, and did you experience similar
issues?

I may be missing something fundamental and your input is appreciated.

SD and Director are both running 3.0.3 on separate CentOS 5.3 machines.

Thanks!





Re: [Bacula-users] Bacula TUI?

2010-03-19 Thread Paul Binkley
I use bconsole daily, in a pure text/command line environment (PuTTY). It
works great! 

-Original Message-
From: Moray Henderson [mailto:moray.hender...@ict-software.org] 
Sent: Friday, March 19, 2010 8:08 AM
To: Bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula TUI?

Wouter Verhelst wrote:
On Fri, Mar 19, 2010 at 10:00:53AM +, Moray Henderson wrote:
 Hello everyone,

 Would there be any interest in a text user interface configuration 
 program for Bacula on Linux?  We use an in-house system
administration
 menu system for our Linux servers, and are considering switching our 
 backup software to Bacula and writing a text-mode UI for it.  If this 
 does go ahead, I was wondering if it could be useful in the wider 
 community.

Eh, there's bacula-console which does that already? Or did you have 
something else in mind?

As far as I can make out, bacula-console requires X-Windows, which we are
not running.  I haven't seen anything suggesting it will run on a text
console.  If it does, it would simplify my job enormously.


Moray.
To err is human.  To purr, feline










[Bacula-users] ERR=A Required Privilege Is Not Held By The Client...

2010-02-16 Thread Paul Binkley
Hi All,

Director is 3.0.2, backing up a 32bit Windows Vista client running 3.0.3.

After adding onefs=no to the FileSet options in the director, I get these
error messages during the backup:

Cannot open C:/Documents and Settings/.../:ERR=A required privilege is not
held by the client.

And,

Could not open directory C:/Documents and Settings/.../:ERR=Access is
denied

What do I need to do to allow Bacula to back up these directories?

Many Thanks,
Paul





Re: [Bacula-users] Bacula 5 install problems

2010-01-28 Thread Paul
On 01/28/2010 06:54 PM, Firestrm wrote:
 
 Is anyone else having install problems with Bacula 5 for Windows 64bit?
 
 I'm running Windows 7 64bit (pro) and Bacula 3.2 worked fine. Although the
 install had its quirks, I was able to fix the file permissions so bacula-fd
 would run.
 
 Now, however, I'm not having any file permission problems, but the bacula-fd
 service will not start. It errors out with Access is Denied, and yes, before
 you ask, I'm running as the administrator.
 
 Thank You for any help...

Did you read the changelog and readme first?

Have you noticed the note that it is necessary to uninstall any previous
version? Did you do this?

When you say you are running as administrator, do you mean that bacula-fd is
running under the administrator account, or just your session?

Thanks for the reply Bruno...

To answer your questions for clarity: I'm running on an account that's part
of the administrators group, and I did come across the uninstall-previous-
version line in the readme, so this attempt was on a fresh install.

The problem with bacula-fd not wanting to start turned out not to be a
permissions problem; it is a problem with the installer writing incorrect
monitor information. If you comment out that section, the file daemon
service will run properly.


 





[Bacula-users] How to apply different retention periods to different jobs on same client

2010-01-22 Thread Paul Binkley
Hi all,

I have a situation where I need to apply different retention times to a
variety of jobs on a single client. I tried to define the retention periods
in the Job resource, but that doesn't work (Bacula doesn't accept them there).

For example, on one machine I have a set of data that I want replaced on a
weekly basis, while everything else has a 30-day retention. Can this be
done?
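One common approach (a sketch only; resource names here are made up): Bacula
ties Volume retention to the Pool rather than the Job, so the weekly-replaced
data can go to its own Pool with a short Volume Retention while the client's
other jobs use a 30-day Pool:

```
# Hypothetical pool with ~1 week retention for the frequently-replaced data
Pool {
  Name = WeeklyReplace
  Pool Type = Backup
  Volume Retention = 7 days
  AutoPrune = yes
  Recycle = yes
}

# Hypothetical pool for everything else on the client
Pool {
  Name = Standard30
  Pool Type = Backup
  Volume Retention = 30 days
  AutoPrune = yes
  Recycle = yes
}

Job {
  Name = "client1-weekly-data"   # made-up job and client names
  Client = client1-fd
  FileSet = "WeeklyDataSet"
  Pool = WeeklyReplace           # retention follows the Pool, not the Job
}
```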

 

Thanks!

 



[Bacula-users] 3.0.3 FD Crashes on Solaris/x86

2010-01-20 Thread Paul Greidanus
I just upgraded to 3.0.3 on my Director/SD, and found that the FD crashes every 
time I try to take a backup. Restores to the FD seem to work. Installing a 
client-only FD from 3.0.1 seems to work just fine though.

Not sure exactly how to report this, or diagnose it better than this, but it is 
consistent across 2 different Solaris machines.


[Bacula-users] Upgrading/migrating bacula director 2.4.4 to 3.0.2 on new machine

2010-01-12 Thread Paul Binkley
Hi all,

 

I wanted to follow up and share my experience with this subject, and ask for
any input about problems I can expect.

 

I seem to have gotten my 2.4.4 director and catalog upgraded to 3.0.2 by
doing the following:

*   Installed Bacula 3.0.2 on CentOS 5.4;
*   Created the empty SQLite DB;
*   Copied bacula.db from the production machine (Fedora 11) to the new machine;
*   Ran update_bacula_tables on the copied DB;
*   Tested to make sure the director would start;
*   Copied the appropriate .conf files (after backing up the default ones);
*   Modified paths in the .conf files as needed;
*   Disabled the production director and shut down the server;
*   Gave the new machine the IP of the production machine;
*   Restarted the director;
*   Used bconsole to verify that the catalog is complete;
*   Ran a job.

 

Anything I am overlooking?
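For reference, a rough shell transcription of the catalog steps above,
assuming an SQLite catalog; all paths are illustrative, not from the thread:

```shell
# Keep a fallback copy of the copied catalog before touching it,
# then run the schema update and test the director config.
cp /var/bacula/bacula.db /var/bacula/bacula.db.pre-3.0.2
cd /etc/bacula   # assumed location of update_bacula_tables on this install
./update_bacula_tables
bacula-dir -t -c /etc/bacula/bacula-dir.conf   # -t: test the config only
```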

 

Thanks!

 

  _  

From: Paul Binkley [mailto:paul.bink...@ois.com] 
Sent: Monday, January 11, 2010 11:49 AM
To: 'bacula-users@lists.sourceforge.net'
Subject: 

 

 

Hi list,

 

I have been using Bacula 2.4.4 in production for several months now, so I
have a robust set of backup volumes. I now have a new machine (CentOS 5.4)
up and running with Bacula 3.0.2 installed. Besides fixing up the conf
files, how do I make the new installation work with the existing data? I
don't want to lose access to the 5TB of backed up data.

 

Is there a way to upgrade the existing database and plug it in to the new
installation while retaining all the important information that Bacula needs
to continue using/recycling volumes or accessing existing volumes for
restore?

 

 

Thanks much!



[Bacula-users] (no subject)

2010-01-11 Thread Paul Binkley
 

Hi list,

 

I have been using Bacula 2.4.4 in production for several months now, so I
have a robust set of backup volumes. I now have a new machine (CentOS 5.4)
up and running with Bacula 3.0.2 installed. Besides fixing up the conf
files, how do I make the new installation work with the existing data? I
don't want to lose access to the 5TB of backed up data.

 

Is there a way to upgrade the existing database and plug it in to the new
installation while retaining all the important information that Bacula needs
to continue using/recycling volumes or accessing existing volumes for
restore?

 

 

Thanks much!



Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-29 Thread Paul Greidanus

On 2009-12-28, at 6:56 PM, Marc Schiffbauer wrote:

 * Paul Greidanus schrieb am 28.12.09 um 23:44 Uhr:
 I'm trying to restore files I have backed up on the NFS server that I'm 
 using to back VMware, but I'm getting similar errors to this every time I 
 try to restore:
 
 28-Dec 12:10 krikkit-dir JobId 1433: Start Restore Job 
 Restore.2009-12-28_12.10.28_54
 28-Dec 12:10 krikkit-dir JobId 1433: Using Device TL2000-1
 28-Dec 12:10 krikkit-sd JobId 1433: 3307 Issuing autochanger unload slot 
 11, drive 0 command.
 28-Dec 12:11 krikkit-sd JobId 1433: 3304 Issuing autochanger load slot 3, 
 drive 0 command.
 28-Dec 12:12 krikkit-sd JobId 1433: 3305 Autochanger load slot 3, drive 0, 
 status is OK.
 28-Dec 12:12 krikkit-sd JobId 1433: Ready to read from volume 09L4 on 
 device TL2000-1 (/dev/rmt/0n).
 28-Dec 12:12 krikkit-sd JobId 1433: Forward spacing Volume 09L4 to 
 file:block 473:0.
 28-Dec 12:15 krikkit-sd JobId 1433: Error: block.c:1010 Read error on fd=4 
 at file:blk 475:0 on device TL2000-1 (/dev/rmt/0n). ERR=I/O error.
 28-Dec 12:15 krikkit-sd JobId 1433: End of Volume at file 475 on device 
 TL2000-1 (/dev/rmt/0n), Volume 09L4
 28-Dec 12:15 krikkit-sd JobId 1433: End of all volumes.
 28-Dec 12:15 krikkit-fd JobId 1433: Error: attribs.c:423 File size of 
 restored file 
 /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk not 
 correct. Original 8589934592, restored 445841408.
 
 Files are backed up from a zfs snapshot which is created just before the 
 backup starts. Every other file I am attempting to restore works just 
 fine... 
 
 Is anyone out there doing ZFS snapshots for VMware, or backing up NFS 
 servers that have .vmdk files on it?
 
 No, but I could imagine that this might have something to do with
 some sparse-file setting.
 
 Have you checked how much space of your 8GB flat vmdk is actually being
 used? Maybe this was 445841408 bytes at backup time?
 
 Does the same happen if you do not use pre-allocated vmdk-disks?
 (Which is better anyway most of the times if you use NFS instead of vmfs)
 

All I use is preallocated disks, especially on NFS. I don't think I can 
actually use sparse disks on NFS.

As a test, I created a 100Gb file from /dev/zero, and tried backing that up and 
restoring it, and I get this:

29-Dec 13:45 krikkit-sd JobId 1446: Error: block.c:1010 Read error on fd=4 at 
file:blk 13:0 on device TL2000-1 (/dev/rmt/0n). ERR=I/O error.
29-Dec 13:45 krikkit-sd JobId 1446: End of Volume at file 13 on device 
TL2000-1 (/dev/rmt/0n), Volume 10L4
29-Dec 13:45 krikkit-sd JobId 1446: End of all volumes.
29-Dec 13:46 filer2-fd JobId 1446: Error: attribs.c:423 File size of restored 
file /scratch/rpool/vm2/.zfs/snapshot/backup/100GbTest not correct. Original 
66365161472, restored 376340827.

So this tells me that whatever's going on, it's not VMware that's causing the 
trouble. I'm wondering if I'm running into problems with ZFS snapshot 
backups, or just with large files and Bacula?
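For anyone reproducing the test above, the large zero-filled file can be
created with something like this (the target path is an example; point it at
the dataset being backed up):

```shell
# Write 100 GB of zeros. On ZFS with compression off this allocates real
# blocks, so the backup reads the full 100 GB.
dd if=/dev/zero of=/rpool/vm2/100GbTest bs=1M count=102400
```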

Paul


Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-29 Thread Paul Greidanus
Solaris is OpenSolaris 2009.06, and I don't think I have compression or dedup 
enabled.

Can you try backing up and restoring a 100Gb file full of zeros from a snapshot?

Paul

On 2009-12-29, at 2:53 PM, Fahrer, Julian wrote:

 What solaris are u using?
 Is zfs compression/ dedup enabled?
 Maybe I could run some test for u. I had no problems with zfs so far
 
 
 -Ursprüngliche Nachricht-
 Von: Paul Greidanus [mailto:paul.greida...@gmail.com] 
 Gesendet: Dienstag, 29. Dezember 2009 21:52
 An: bacula-users@lists.sourceforge.net
 Betreff: Re: [Bacula-users] Cannot restore VMmware/ZFS
 
 
 On 2009-12-28, at 6:56 PM, Marc Schiffbauer wrote:
 
 * Paul Greidanus schrieb am 28.12.09 um 23:44 Uhr:
 I'm trying to restore files I have backed up on the NFS server that I'm 
 using to back VMware, but I'm getting similar errors to this every time I 
 try to restore:
 
 28-Dec 12:10 krikkit-dir JobId 1433: Start Restore Job 
 Restore.2009-12-28_12.10.28_54
 28-Dec 12:10 krikkit-dir JobId 1433: Using Device TL2000-1
 28-Dec 12:10 krikkit-sd JobId 1433: 3307 Issuing autochanger unload slot 
 11, drive 0 command.
 28-Dec 12:11 krikkit-sd JobId 1433: 3304 Issuing autochanger load slot 3, 
 drive 0 command.
 28-Dec 12:12 krikkit-sd JobId 1433: 3305 Autochanger load slot 3, drive 
 0, status is OK.
 28-Dec 12:12 krikkit-sd JobId 1433: Ready to read from volume 09L4 on 
 device TL2000-1 (/dev/rmt/0n).
 28-Dec 12:12 krikkit-sd JobId 1433: Forward spacing Volume 09L4 to 
 file:block 473:0.
 28-Dec 12:15 krikkit-sd JobId 1433: Error: block.c:1010 Read error on fd=4 
 at file:blk 475:0 on device TL2000-1 (/dev/rmt/0n). ERR=I/O error.
 28-Dec 12:15 krikkit-sd JobId 1433: End of Volume at file 475 on device 
 TL2000-1 (/dev/rmt/0n), Volume 09L4
 28-Dec 12:15 krikkit-sd JobId 1433: End of all volumes.
 28-Dec 12:15 krikkit-fd JobId 1433: Error: attribs.c:423 File size of 
 restored file 
 /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk not 
 correct. Original 8589934592, restored 445841408.
 
 Files are backed up from a zfs snapshot which is created just before the 
 backup starts. Every other file I am attempting to restore works just 
 fine... 
 
 Is anyone out there doing ZFS snapshots for VMware, or backing up NFS 
 servers that have .vmdk files on it?
 
 No, but I could imagine that this might have something to do with
 some sparse-file setting.
 
  Have you checked how much space of your 8GB flat vmdk is actually being
  used? Maybe this was 445841408 bytes at backup time?
 
 Does the same happen if you do not use pre-allocated vmdk-disks?
 (Which is better anyway most of the times if you use NFS instead of vmfs)
 
 
 All I use is preallocated disks especially on NFS.. I don't think I can 
 actually use sparse disks on NFS.
 
 As a test, I created a 100Gb file from /dev/zero, and tried backing that up 
 and restoring it, and I get this:
 
 29-Dec 13:45 krikkit-sd JobId 1446: Error: block.c:1010 Read error on fd=4 at 
 file:blk 13:0 on device TL2000-1 (/dev/rmt/0n). ERR=I/O error.
 29-Dec 13:45 krikkit-sd JobId 1446: End of Volume at file 13 on device 
 TL2000-1 (/dev/rmt/0n), Volume 10L4
 29-Dec 13:45 krikkit-sd JobId 1446: End of all volumes.
 29-Dec 13:46 filer2-fd JobId 1446: Error: attribs.c:423 File size of restored 
 file /scratch/rpool/vm2/.zfs/snapshot/backup/100GbTest not correct. Original 
 66365161472, restored 376340827.
 
 So, this tells me that whatever's going on, it's not Vmware that's causing me 
 the troubles.. I'm wondering if I'm running into problems with ZFS snapshot 
 backups, or just something with large files and Bacula?
 
 Paul




Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-29 Thread Paul Greidanus
Hi Julian,

Here's the info for that filesystem.  I also just tried my 100Gb test, which 
fails both on the filesystem itself, and the snapshot.  I don't have problems 
with 1Gb files either... 

NAME       PROPERTY                        VALUE                  SOURCE
rpool/vm2  type                            filesystem             -
rpool/vm2  creation                        Fri Nov  6 14:47 2009  -
rpool/vm2  used                            116G                   -
rpool/vm2  available                       751G                   -
rpool/vm2  referenced                      116G                   -
rpool/vm2  compressratio                   1.00x                  -
rpool/vm2  mounted                         yes                    -
rpool/vm2  quota                           none                   default
rpool/vm2  reservation                     none                   default
rpool/vm2  recordsize                      128K                   default
rpool/vm2  mountpoint                      /rpool/vm2             default
rpool/vm2  sharenfs                        rw,root=vmsrv2,anon=0  local
rpool/vm2  checksum                        on                     default
rpool/vm2  compression                     off                    default
rpool/vm2  atime                           on                     default
rpool/vm2  devices                         on                     default
rpool/vm2  exec                            on                     default
rpool/vm2  setuid                          on                     default
rpool/vm2  readonly                        off                    default
rpool/vm2  zoned                           off                    default
rpool/vm2  snapdir                         hidden                 default
rpool/vm2  aclmode                         groupmask              default
rpool/vm2  aclinherit                      restricted             default
rpool/vm2  canmount                        on                     default
rpool/vm2  shareiscsi                      off                    default
rpool/vm2  xattr                           on                     default
rpool/vm2  copies                          1                      default
rpool/vm2  version                         3                      -
rpool/vm2  utf8only                        off                    -
rpool/vm2  normalization                   none                   -
rpool/vm2  casesensitivity                 sensitive              -
rpool/vm2  vscan                           off                    default
rpool/vm2  nbmand                          off                    default
rpool/vm2  sharesmb                        off                    default
rpool/vm2  refquota                        none                   default
rpool/vm2  refreservation                  none                   default
rpool/vm2  primarycache                    all                    default
rpool/vm2  secondarycache                  all                    default
rpool/vm2  usedbysnapshots                 14.9M                  -
rpool/vm2  usedbydataset                   116G                   -
rpool/vm2  usedbychildren                  0                      -
rpool/vm2  usedbyrefreservation            0                      -
rpool/vm2  org.opensolaris.caiman:install  ready                  inherited from rpool

On 2009-12-29, at 4:11 PM, Fahrer, Julian wrote:

 Hey paul,
 
 I don't have enough space on the test system right now. I just created a new 
 ZFS filesystem without compression/dedup and a 1 GB file on a Solaris 10u6 
 system. I could back up and restore from a snapshot without errors.
 
 Could u post your zfs config?
 zfs get all zfs-name
 
 Julian
 
 -Ursprüngliche Nachricht-
 Von: Paul Greidanus [mailto:paul.greida...@gmail.com] 
 Gesendet: Dienstag, 29. Dezember 2009 23:00
 An: Fahrer, Julian
 Cc: bacula-users@lists.sourceforge.net
 Betreff: Re: AW: [Bacula-users] Cannot restore VMmware/ZFS
 
 Solaris is OpenSolaris 2009.06 and I don't think I have anything specifically 
 enabled with compression or dedup enabled.
 
 Can you try backing up and restoring a 100Gb file full of zeros from a 
 snapshot?
 
 Paul
 
 On 2009-12-29, at 2:53 PM, Fahrer, Julian wrote:
 
 What solaris are u using?
 Is zfs compression/ dedup enabled?
 Maybe I could run some test for u. I had no problems with zfs so far
 
 
 -Ursprüngliche Nachricht-
 Von: Paul Greidanus [mailto:paul.greida...@gmail.com] 
 Gesendet: Dienstag, 29. Dezember 2009 21:52
 An: bacula-users@lists.sourceforge.net
 Betreff: Re

[Bacula-users] Cannot restore VMmware/ZFS

2009-12-28 Thread Paul Greidanus
I'm trying to restore files I have backed up on the NFS server that I'm using 
to back VMware, but I'm getting similar errors to this every time I try to 
restore:

28-Dec 12:10 krikkit-dir JobId 1433: Start Restore Job 
Restore.2009-12-28_12.10.28_54
28-Dec 12:10 krikkit-dir JobId 1433: Using Device TL2000-1
28-Dec 12:10 krikkit-sd JobId 1433: 3307 Issuing autochanger unload slot 11, 
drive 0 command.
28-Dec 12:11 krikkit-sd JobId 1433: 3304 Issuing autochanger load slot 3, 
drive 0 command.
28-Dec 12:12 krikkit-sd JobId 1433: 3305 Autochanger load slot 3, drive 0, 
status is OK.
28-Dec 12:12 krikkit-sd JobId 1433: Ready to read from volume 09L4 on 
device TL2000-1 (/dev/rmt/0n).
28-Dec 12:12 krikkit-sd JobId 1433: Forward spacing Volume 09L4 to 
file:block 473:0.
28-Dec 12:15 krikkit-sd JobId 1433: Error: block.c:1010 Read error on fd=4 at 
file:blk 475:0 on device TL2000-1 (/dev/rmt/0n). ERR=I/O error.
28-Dec 12:15 krikkit-sd JobId 1433: End of Volume at file 475 on device 
TL2000-1 (/dev/rmt/0n), Volume 09L4
28-Dec 12:15 krikkit-sd JobId 1433: End of all volumes.
28-Dec 12:15 krikkit-fd JobId 1433: Error: attribs.c:423 File size of restored 
file /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk 
not correct. Original 8589934592, restored 445841408.

Files are backed up from a zfs snapshot which is created just before the backup 
starts. Every other file I am attempting to restore works just fine... 

Is anyone out there doing ZFS snapshots for VMware, or backing up NFS servers 
that have .vmdk files on it?

Thanks

Paul


[Bacula-users] ERR=Table 'batch' is marked as crashed and should be repaired

2009-12-26 Thread paul
I've been getting this error on some of my backups (not all).   The full 
error is:

26-Dec 01:00 Arwen-dir JobId 11: Fatal error: Can't fill Path table 
Query failed: INSERT INTO Path (Path) SELECT a.Path FROM (SELECT 
DISTINCT Path FROM batch) AS a WHERE NOT EXISTS (SELECT Path FROM Path 
AS p WHERE p.Path = a.Path): ERR=Table 'batch' is marked as crashed and 
should be repaired

26-Dec 01:00 Arwen-dir JobId 11: Error: Bacula Arwen-dir 2.4.2 
(26Jul08): 26-Dec-2009 01:00:34
  Build OS:   x86_64-pc-linux-gnu debian lenny/sid
  JobId:  11
  Job:Arwen_XP64_Backup_Set_2.2009-12-25_20.33.10
  Backup Level:   Full
  Client: Arwen_XP64 2.4.2 (26Jul08) 
x86_64-pc-linux-gnu,debian,lenny/sid
  FileSet:Arwen_XP64 2009-12-25 20:33:35
  Pool:   Backup_Set_2 (From Job resource)
  Storage:Backup_Set_2 (From Pool resource)
  Scheduled time: 25-Dec-2009 20:33:26
  Start time: 26-Dec-2009 00:14:57
  End time:   26-Dec-2009 01:00:34
  Elapsed time:   45 mins 37 secs
  Priority:   16
  FD Files Written:   139,974
  SD Files Written:   139,974
  FD Bytes Written:   18,201,877,756 (18.20 GB)
  SD Bytes Written:   18,228,549,111 (18.22 GB)
  Rate:   6650.3 KB/s
  Software Compression:   39.6 %
  VSS:no
  Storage Encryption: no
  Volume name(s): Backup_Set_2_0003
  Volume Session Id:  6
  Volume Session Time:1261630416
  Last Volume Bytes:  185,551,543,549 (185.5 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:*** Backup Error ***

I'm running a rotating set of 3 pools on a monthly cycle.   This happens 
ONLY with the 2nd pool and only with 2 of the jobs.   I thought at first 
this might be a MySQL timeout, but this job was only 45 minutes in 
length while another job that takes 28 hours doesn't hit this problem.   
And this job with one of the other pools doesn't hit this error.

I've tried running bscan to correct the catalog, I've tried dumping the 
entire catalog and redoing the backups.   Still get the error.   Result 
is that I can't do any incremental backups of this job since the job is 
never marked as having a full backup.

This problem was introduced, I think, by running out of space in root.   
That condition has since been resolved; root is now running at 46% 
utilization.

Any thoughts on a repair strategy?

Thanx,

Paul Howard









[Bacula-users] [Fwd: Re: ERR=Table 'batch' is marked as crashed and should be repaired]

2009-12-26 Thread paul



 Original Message 
Subject: 	Re: [Bacula-users] ERR=Table 'batch' is marked as crashed and 
should be repaired

Date:   Sat, 26 Dec 2009 12:29:21 -0500
From:   Dan Langille d...@langille.org
To: paul p...@sealandskyphoto.com
References: 	4b363942.2080...@sealandskyphoto.com 
4b3641ce.5060...@langille.org 4b364768.6040...@sealandskyphoto.com




All this needs to be on the list.

paul wrote:

Dan,

I've done all of that ;-(.   When that didn't work, I dumped the tables, 
recreated them from scratch, and re-ran the backups.   Still get the 
failure.
The problem does appear to be the table 'batch', which is a temporary, 
transient table and hence hard to repair.   Since I dumped the tables and 
recreated them from scratch, it would seem that some state outside the 
tables is surviving the dump-and-recreate and is causing the failure.


I've put in the timeouts for MySQL recommended by 
http://wiki.bacula.org/doku.php?id=faq  (ref: Fixing table 'bacula.batch' 
doesn't exist), and I'm trying again.


Failing that, I think I'll trash the state files from the bacula 
working directory (same page, ref: How to Clear Your Console History).


Other than that, I'm out of ideas.

Paul



Dan Langille wrote:

paul wrote:
I've been getting this error on some of my backups (not all).   The 
full error is:


26-Dec 01:00 Arwen-dir JobId 11: Fatal error: Can't fill Path table 
Query failed: INSERT INTO Path (Path) SELECT a.Path FROM (SELECT 
DISTINCT Path FROM batch) AS a WHERE NOT EXISTS (SELECT Path FROM 
Path AS p WHERE p.Path = a.Path): ERR=Table 'batch' is marked as 
crashed and should be repaired


26-Dec 01:00 Arwen-dir JobId 11: Error: Bacula Arwen-dir 2.4.2 
(26Jul08): 26-Dec-2009 01:00:34

  Build OS:   x86_64-pc-linux-gnu debian lenny/sid
  JobId:  11
  Job:Arwen_XP64_Backup_Set_2.2009-12-25_20.33.10
  Backup Level:   Full
  Client: Arwen_XP64 2.4.2 (26Jul08) 
x86_64-pc-linux-gnu,debian,lenny/sid

  FileSet:Arwen_XP64 2009-12-25 20:33:35
  Pool:   Backup_Set_2 (From Job resource)
  Storage:Backup_Set_2 (From Pool resource)
  Scheduled time: 25-Dec-2009 20:33:26
  Start time: 26-Dec-2009 00:14:57
  End time:   26-Dec-2009 01:00:34
  Elapsed time:   45 mins 37 secs
  Priority:   16
  FD Files Written:   139,974
  SD Files Written:   139,974
  FD Bytes Written:   18,201,877,756 (18.20 GB)
  SD Bytes Written:   18,228,549,111 (18.22 GB)
  Rate:   6650.3 KB/s
  Software Compression:   39.6 %
  VSS:no
  Storage Encryption: no
  Volume name(s): Backup_Set_2_0003
  Volume Session Id:  6
  Volume Session Time:1261630416
  Last Volume Bytes:  185,551,543,549 (185.5 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:*** Backup Error ***

I'm running a rotating set of 3 pools on a monthly cycle.   This 
happens ONLY with the 2nd pool and only with 2 of the jobs.   I 
thought at first this might be a MySQL timeout, but this job was only 
45 minutes in length while another job that takes 28 hours doesn't 
hit this problem.   And this job with one of the other pools doesn't 
hit this error.


I've tried running bscan to correct the catalog, I've tried dumping 
the entire catalog and redoing the backups.   Still get the error.   
Result is that I can't do any incremental backups of this job since 
the job is never marked as having a full backup.


This problem was introduced, I think, by running out of space in 
root.   This problem has since been resolved - root is now running at 
46% utilization.


Any thoughts on a repair strategy?


I think the relevant error is: Table 'batch' is marked as crashed and 
should be repaired


I would:

- stop bacula
- do a backup of the database
- find out how to repair a MySQL database table and do so
- run dbcheck
- hope
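For the repair step, something along these lines in the mysql client can
work for a crashed MyISAM table; the catalog database name "bacula" is an
assumption, and the database should be dumped first as advised above:

```
-- Run against the catalog database after stopping Bacula and taking a dump.
CHECK TABLE batch;
REPAIR TABLE batch;
-- Note: 'batch' is normally created per job. If it does not exist at all,
-- letting Bacula recreate it is the usual fix rather than repairing.
```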






[Bacula-users] dependency resolution issues installing 3.0.2

2009-12-02 Thread Paul Binkley
Hi Folks,

Has anyone seen this dependency error before, and if so, what am I doing
wrong?

libpython2.5.so.1.0 is needed by package Bacula-sqlite-3.0.2-1.i386
(/Bacula-sqlite-3.0.2-1.fc10.i386)
libcrypto.so.7 is needed by package Bacula-sqlite-3.0.2-1.i386
(/Bacula-sqlite-3.0.2-1.fc10.i386)
libssl.so.7 is needed by package Bacula-sqlite-3.0.2-1.i386
(/Bacula-sqlite-3.0.2-1.fc10.i386)

I found these similar files on my system.
/usr/lib64/libpython2.5.so.1.0
/usr/lib64/libcrypto.so.7
/usr/lib64/libssl.so.7

Is the problem that the Bacula package is 32-bit while these installed
libraries are 64-bit? How can I get around this?
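That is the likely explanation: an i386 package needs 32-bit libraries (/usr/lib), not the 64-bit copies in /usr/lib64. A sketch of the fix on Fedora (the sonames are taken from the error above; the suggested package names are assumptions, so the script only prints the yum commands for review):

```shell
#!/bin/sh
# Sketch: locate and install 32-bit providers for the missing sonames.
# Package names in the final line are assumptions; confirm with "yum provides".
missing="libpython2.5.so.1.0 libcrypto.so.7 libssl.so.7"
for so in $missing; do
  echo "yum provides '$so'"        # find which package supplies a 32-bit copy
done
echo "yum install openssl.i586 python.i586   # e.g. install the 32-bit builds"
```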

Thanks!

BTW, the host machine is Fedora 11 x64. I have been running Bacula 2.4.4 on
this machine without issue for months, but I need to upgrade.




[Bacula-users] Possible to downgrade sd/fd from 3.0 to 2.4?

2009-11-30 Thread Paul Binkley
Hi Folks,

I hope someone can help. I have been using Bacula for a couple of months now
running 2.4 on the director and all client daemons, and 3.0 for the storage
daemon (my mistake). This has been working fine. The 3.0 file daemon isn't
compatible with the 2.4 director, so I need to downgrade.

It seems to be much more trouble to upgrade all the clients and the
director, so downgrading the sd seems to be the best option. Does anyone
have experience with this? What kind of trouble will I be getting into?

Thanks!

Paul E. Binkley
Objective Interface Systems, Inc.
220 Spring St., Suite 530
Herndon, VA 20170
Tel: 703.295.6528
Fax: 703.296.6501
paul.bink...@ois.com
http://www.ois.com
 
***
This email contains information confidential and proprietary
to Objective Interface Systems, Inc. and should not be
disclosed to third parties without written permission from
Objective Interface Systems, Inc. If this was delivered to
someone other than the intended recipient by mistake please
notify le...@ois.com and remove any remaining copies of this
email.
***




Re: [Bacula-users] Possible to downgrade sd/fd from 3.0 to 2.4?

2009-11-30 Thread Paul Binkley
Sorry, Director is running 2.4.4 on CentOS 5.3. Storage Daemon is running
3.0.2 on a different CentOS 5.3 machine. Director and Storage are fine
together, but the 2.4.4 director cannot access the 3.0.2 fd (running on the
sd machine). This is all with sqlite.

Do you think upgrading the director is a better solution over the long-term
since the 3.0 director can talk to 2.4 clients?

Of note, I am having dependency problems while trying to upgrade director
from 2.4 to 3.0. Different issue I guess. I will pose that question in a new
thread based on your advice here.

Thanks!

Paul E. Binkley
Objective Interface Systems, Inc.
220 Spring St., Suite 530
Herndon, VA 20170
Tel: 703.295.6528
Fax: 703.296.6501
paul.bink...@ois.com
http://www.ois.com
 

-Original Message-
From: John Drescher [mailto:dresche...@gmail.com] 
Sent: Monday, November 30, 2009 10:03 AM
To: Paul Binkley
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Possible to downgrade sd/fd from 3.0 to 2.4?

On Mon, Nov 30, 2009 at 9:45 AM, Paul Binkley paul.bink...@ois.com wrote:
 Hi Folks,

 I hope someone can help. I have been using Bacula for a couple of months now
 running 2.4 on the director and all client daemons, and 3.0 for the storage
 daemon (my mistake). This has been working fine. The 3.0 file daemon isn't
 compatible with the 2.4 director, so I need to downgrade.

 It seems to be much more trouble to upgrade all the clients and the
 director, so downgrading the sd seems to be the best option. Does anyone
 have experience with this? What kind of trouble will I be getting into?


You did not say what OS your 3.0 client was on. Also a 3.0 director
can talk to a 2.4.X client. At home I have this setup.

John




[Bacula-users] Director connection failure -- FD and SD on same machine

2009-11-19 Thread Paul Binkley
Greetings,

I am having an issue in that the director cannot communicate with a
particular file daemon. The file daemon in question is running on the same
machine as the storage daemon.

If the file daemon is not running, the director has no problem with any
other file daemons, but if the fd in question is running, the director can
communicate with other clients only until a failed try with the problem fd.

The included console output shows this anomaly. Notice that the estimate
works fine for 2 different clients, then fails when trying to access the
bad client, then can no longer access the previously good clients.

Also possibly of note is the fact that the Fatal Error line shows that the
director seems to be stuck on the bad fd (xxx.xxx.0.70:9102) after the
original failure.

Thanks for any suggestions.

#estimate
.
Using Catalog MyCatalog
Connecting to Client bacula-fd at xxx.xxx.0.71:9102
2000 OK estimate files=197593 bytes=20,445,781,245

#estimate
.
Using Catalog MyCatalog
Connecting to Client MISD620-fd at xxx.xxx.61.7:9102
2000 OK estimate files=100996 bytes=40,012,706,471

#estimate
.
Using Catalog MyCatalog
Connecting to Client remotefs-fd at xxx.xxx.0.70:9102
Failed to connect to Client.
19-Nov 15:24 bacula-dir JobId 0: Fatal error: File daemon at xxx.xxx.0.70:9102 rejected Hello command

#estimate
.
Using Catalog MyCatalog
Connecting to Client bacula-fd at xxx.xxx.0.71:9102
Failed to connect to Client.
19-Nov 15:25 bacula-dir JobId 0: Fatal error: Unable to authenticate with File daemon at xxx.xxx.0.70:9102. Possible causes:
Passwords or names not the same or
Maximum Concurrent Jobs exceeded on the FD or
FD networking messed up (restart daemon).
Please see http://www.bacula.org/rel-manual/faq.html#AuthorizationErrors for help.

#estimate
.
Using Catalog MyCatalog
Connecting to Client MISD620-fd at xxx.xxx.61.7:9102
Failed to connect to Client.
19-Nov 15:25 bacula-dir JobId 0: Fatal error: Unable to authenticate with File daemon at xxx.xxx.0.70:9102. Possible causes:
Passwords or names not the same or
Maximum Concurrent Jobs exceeded on the FD or
FD networking messed up (restart daemon).
Please see http://www.bacula.org/rel-manual/faq.html#AuthorizationErrors for help.

#

 

Paul E. Binkley
Objective Interface Systems, Inc.
220 Spring St., Suite 530
Herndon, VA 20170
Tel: 703.295.6528
Fax: 703.296.6501
paul.bink...@ois.com
http://www.ois.com

 




[Bacula-users] Expiration question

2009-11-02 Thread Paul
Hello everybody

I have two simple questions about expiration.
I use Bacula 3.0.2 on a CentOS 5.3 system. My backup schedule is probably
fairly standard: a full backup on the first Sunday of the month, with a
retention of half a year; a differential backup every other Sunday, with a
retention of 40 days; and incremental backups on all other days (Mon, Tue,
Wed, Thu, Fri, and Sat), with a retention of 6 days. I back up to disk.
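For reference, that cycle might be expressed roughly like this (a sketch only; pool names, run times, and the per-pool retention values noted in the comments are assumptions based on the description above):

```
Schedule {
  Name = "StandardCycle"
  Run = Level=Full         Pool=Full-Pool 1st sun at 23:05      # Full-Pool: Volume Retention = 6 months
  Run = Level=Differential Pool=Diff-Pool 2nd-5th sun at 23:05  # Diff-Pool: Volume Retention = 40 days
  Run = Level=Incremental  Pool=Inc-Pool  mon-sat at 23:05      # Inc-Pool:  Volume Retention = 6 days
}
```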
Question one:
Everything runs smoothly if you start your first backup on the first Sunday
of a month, but there is a problem if you don't. Why? Suppose my first backup
is on a Monday. An incremental backup is scheduled, but no previous backup is
found, so the system runs a full backup yet records it with level I. Full
backups are supposed to have a retention of half a year, but in this case I
lose the full backup after 6 days. Why does the first incremental backup,
which is actually a full backup, have I as its backup level?
Question two:
I have in mind to use migration to save disk space. After the full backup
has run on the first Sunday of the month, I perform a migrate to move the
data to an LTO-2 tape drive. I don't quite understand what volume retention
I should use for the particular pool on disk. I thought it would be sensible
to set the retention of full backups back from half a year to 5 days; within
those 5 days I migrate the data from disk to a tape pool, and that tape pool
has a volume retention of half a year. Why is the full (migrated) disk
backup not purged after 5 days? It's on tape now with a retention of half a
year.
Thanks a lot for your time.
Rgds, Paul





Re: [Bacula-users] Restore encrypted backup to remote machine - Error: Missing private key required to decrypt encrypted backup data.

2009-10-14 Thread Paul Armstrong
At 2009-10-14T10:11-0400, Ralph Harmil wrote:
I tried to restore an encrypted backup of a remote machine locally
without luck.  The error I get is:
Error: Missing private key required to decrypt encrypted backup
data.
I have tried putting the client keys every which way into the director
config file but can't seem to figure out what I am missing.

You need to change the keys in the file daemon (not the director) on the
host you're restoring.

Paul



[Bacula-users] Howto limit job concurrency on individual tape drives within a library

2009-03-28 Thread Paul Hanson
Currently we have an IBM TS3200 working very well over fibre channel; it has
two Ultrium 4 tape drives. If I set concurrency to two (2), then both drives
can work fine. However, if only one drive is in operation and two jobs start
for the SAME tape pool, the jobs are interleaved, and I would prefer the
jobs to run sequentially.

What options do I have for controlling concurrency to get the best from the
two-drive library?
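One possible lever (a sketch; whether the Device-level directive is available depends on your Bacula version, so verify against the manual) is to keep concurrency enabled at the Director/Storage level but cap each drive at one job, so two jobs for the same pool queue on a drive instead of interleaving:

```
# bacula-sd.conf sketch; drive names are assumptions
Device {
  Name = Drive-1
  Maximum Concurrent Jobs = 1   # jobs writing via this drive run one at a time
  ...
}
Device {
  Name = Drive-2
  Maximum Concurrent Jobs = 1
  ...
}
```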

 

Regards



***Disclaimer***

This email and any attachments may contain confidential and/or privileged 
material; it is for the intended addressee(s) only. If you are not a named 
addressee, you must not use, retain or disclose such information. 

The Waterdale Group (3477263), comprising of CPiO Limited (2488682), Ardent 
Solutions Limited (4807328), eSpida Limited (4021203), Advanced Digital 
Technology Limited (1750478) and Intellisell (6070355) whilst taking all 
reasonable precautions against email or attachment viruses, cannot guarantee 
the safety of this email.

The views expressed in this email are those of the originator and do not 
necessarily represent the views of The Waterdale Group

Nothing in this email shall bind The Waterdale Group in any contract or 
obligation.  

For further information visit www.waterdalegroup.co.uk 


Re: [Bacula-users] Newbee questions client/server + GUI

2008-11-17 Thread Paul Cable
 

 -Original Message-
 From: Johnny SSH [mailto:[EMAIL PROTECTED] 
 Sent: Monday, November 17, 2008 9:30 AM
 To: bacula-users@lists.sourceforge.net
 Subject: Re: [Bacula-users] Newbee questions client/server + GUI
 
 
 I am using standard Debian Etch version which I think is v1.38?
 
 I have bacula-web installed, so I can view the backups 
 however I tried looking for bat and within my apt repos it 
 says it needs v.2.4 even though the common version is much 
 older?? Maybe a Debian glitch.
 
 I also tried to check out bweb and brestore but couldn't find 
 them anywhere?
 After Googling around I found some remnants of them on 
 SourceForge, but nothing that I could install.
 
 Also just to make the first reply clear:
 
 [2. For my multiple clients do I need to use multiple 
 pools or will 1
 pool
 store data for each machine individually?
 
 
 
 I am using one pool for each client
 ]
 
 What is the advantage of using a pool per client or is this a 
 necessity? I managed to install bacula-console-wx and view my 
 file lists from there however I tried restoring one to a 
 client and the program seg faulted on me.
 Is that the result of single pool I wonder?
 
 Thanks
 
 
 
 John Drescher-2 wrote:
  
  4. I am confused between the GUI's of Bacula, on the wiki it says 
  that there is something called bat. How can I obtain this and what 
  command do I need to issue to view it? (my servers don't 
 have GUI's 
  so I will need to do this via a remote X session through SSH!)
 
  Do you have a linux client on the network? Are you using 
 bacula 2.2 or 
  better?
  
  John
  
  
  
  
 
 --
 View this message in context: 
 http://www.nabble.com/Newbee-questions-client-server-%2B-GUI-t
 p20529490p20540458.html
 Sent from the Bacula - Users mailing list archive at Nabble.com.
 
 
 
 


I went in and enabled the Debian unstable repository and did an update-all.
Then I just compiled from the tarball again and it installed bat right along
with the main bacula, IIRC. Debian stable doesn't have some of the libraries
required to run the newest Bacula, if I remember right; there was a
dependency missing when I tried to install 2.4.3 on stable. There's probably
a better way than just an update-all against an unstable repository, but it
has been running fine for me for about a week with no reboots (still in a
test environment with only one client, though).





Re: [Bacula-users] Newbee questions client/server + GUI

2008-11-17 Thread Paul Cable
 

 -Original Message-
 From: Johnny SSH [mailto:[EMAIL PROTECTED] 
 Sent: Monday, November 17, 2008 10:18 AM
 To: bacula-users@lists.sourceforge.net
 Subject: Re: [Bacula-users] Newbee questions client/server + GUI
 
 
 Hmm well I guess then I will have to look at running Bacula 
 on a different system , like CentOS for the future but as my 
 server is the main server I don't want to distroy it by 
 upgrading to an unstable system or version.
 
 Where can I find bweb and brestore? I already mentioned 
 previously that I searched all over for them but couldn't 
 find them at all!!
 
 
 John Drescher-2 wrote:
  
  I am using standard Debian Etch version which I think is v1.38?
 
  I have bacula-web installed, so I can view the backups however I 
  tried looking for bat and within my apt repos it says it 
 needs v.2.4 
  even though the common version is much older?? Maybe a 
 Debian glitch.
 
  
  You can not run bat with that ancient 1.38 release.
  
  John
  
  
  
  
 
 
 
 



You shouldn't have to update your whole Debian install to unstable; I think
the Bacula issues were only related to certain libs. If you can track down
specifically what is keeping 2.4.3 from running on Etch, you might be able
to update only those few libs and get everything working.

I don't know what bweb and brestore are. I'd suggest Webmin, though, for
administering your Bacula server (if, like me, you're not comfortable on the
command line). There are some things it won't do from the GUI, but the most
important day-to-day tasks are covered. Webmin can do pretty much everything
BAT does as well, eliminating the need for BAT. Not to mention all the other
server controls provided by Webmin (DHCP, SMTP, Samba, etc.).

This is all my opinion, though, and I haven't been using Bacula very long,
so don't go quoting me ;)





[Bacula-users] FW: Associate BAT w/ its config permanently / Default dir.conf

2008-11-11 Thread Paul Cable
I was able to answer this one myself. The default director config can be
found in the tar at
 
/src/dird/bacula-dir.conf
 
Should have looked a bit harder before asking ;)


  _  

From: Paul Cable [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 10, 2008 1:28 PM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Associate BAT w/ its config permanently /
Default dir.conf



Question 2. 
Where can I find a copy of the default bacula-dir.conf in the 2.4.3 tar
file? I've edited some of the entries for the default Catalog backup and the
default schedule it uses, I'd like to go back to default and start again
(maybe with some includes so I don't screw up the catalog runs). Maybe there
is a copy of the default director config somewhere on the internet? I can't
track it down in the extracted folder from the install tar package, but I
may just be looking in the wrong place.



[Bacula-users] Associate BAT w/ its config permanently / Default dir.conf

2008-11-10 Thread Paul Cable
Here's my setup: 
Debian Lenny / Bacula 2.4.3 / BAT / Webmin
Couple 250GB SATA Drives Mirrored
No tapes, going to stick to HDD storage

Question 1.
When trying to start BAT from the command line I have to specify with
the -c switch a configuration file every time. Is it possible to get
this to be automatic when I type just BAT? It seems to work fine for
starting the daemons.
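One workaround for Question 1 (a sketch; the config path is an assumption) is a tiny wrapper script placed earlier in $PATH so plain `bat` always gets the -c flag. For safety this version only prints the command it would run:

```shell
#!/bin/sh
# Sketch: wrapper so "bat" always receives its config file.
# In the real wrapper, replace the echo with:
#   exec /usr/bin/bat -c "$BAT_CONF" "$@"
BAT_CONF="${BAT_CONF:-/etc/bacula/bat.conf}"   # path is an assumption
echo bat -c "$BAT_CONF"
```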

Question 2. 
Where can I find a copy of the default bacula-dir.conf in the 2.4.3 tar
file? I've edited some of the entries for the default Catalog backup and
the default schedule it uses, I'd like to go back to default and start
again (maybe with some includes so I don't screw up the catalog runs).
Maybe there is a copy of the default director config somewhere on the
internet? I can't track it down in the extracted folder from the install
tar package, but I may just be looking in the wrong place.

Question 3. Is it possible to save your backup to two places with the
same job? My files will be stored on the same server and partition as
everything else . The only other computer in my equation so far is the
client. While the drives in the server are mirrored, I'd also like to
store the files one more time on another windows server (shared
drived/networked storage). I know I can probably make it happen through
third party software or some type of cron job, but I'd like the
secondary storage location to use the same pruning rules as the primary
storage location.

Thanks for any help, and sorry if I am asking dumb questions; I'm a Windows
baby. I really love Bacula, though. I'd been using it for years without even
knowing it (I bought a Revinetix appliance). With Webmin it's almost as easy
to set up and use as the Revinetix was, even for a Windows user like myself.
I just can't get restores to work through Webmin yet, but that's no big deal
considering how much easier the config files are to deal with.

Sorry if I'm breaking any mailing list rules or common-sense stuff. Please
correct me; I learn quickly. And just as an FYI, I did try to search the
archive a bit, but it is very slow to respond.

Thanks,
Paul Cable




[Bacula-users] NFS Permission Problems

2008-02-15 Thread Paul Waldo
Hi all,

I know that this is not an issue directly related to bacula, but I'm hoping 
that some bacula user has already cracked it.

I'm running bacula version 1.38.11 on a dedicated backup host.  I have an 
appliance NAS box that has a number of CIFS shares and NFS exports.  Since 
the NAS box cannot run bacula, I mount each of the shares on the backup host, 
perform the backup from the mounted directories to the host, then unmount the 
shares.

This works fine for the CIFS, but I have just started to work with the NFS 
mounts and I'm running into permission problems.  I can mount, but the user 
that owns the files is an undefined user on the backup host.  Here is how I 
have it mounted on the host (from mount):

alexandria:/raid0/data/paul_home on /mounts/for_backup/paul_home type nfs 
(ro,noexec,nosuid,nodev,addr=10.0.0.100)

Here is the crux of the problem:
[EMAIL PROTECTED]:/mounts/for_backup# ls -ld /mounts/for_backup/paul_home
drwxrwx--x 93 1002 1002 8192 2008-02-15 08:31 /mounts/for_backup/paul_home

The user that owns the directory is 1002, which is not defined on the backup 
host.

Has anyone else solved this problem?  Any help would be greatly appreciated!  
Thanks in advance.

Paul



Re: [Bacula-users] [SOLVED] NFS Permission Problems

2008-02-15 Thread Paul Waldo
I think I answered my own question.  I set the NAS box to use no_root_squash 
and I was able to do a backup.  Sorry for the noise :-)
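For reference, the matching export line on the NAS might look like this (a sketch; the path follows the mount output above, the client network is an assumption, and note that no_root_squash lets the backup host's root read every file, which is what a backup needs but is worth restricting to that one client):

```
# /etc/exports on the NAS (sketch)
/raid0/data/paul_home  10.0.0.0/24(ro,sync,no_root_squash)
```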

Paul

On Friday 15 February 2008 10:24:31 am Paul Waldo wrote:
 This works fine for the CIFS, but I have just started to work with the NFS
 mounts and I'm running into permission problems.  I can mount, but the user
 that owns the files is an undefined user on the backup host.  Here is how I
 have it mounted on the host (from mount):

 alexandria:/raid0/data/paul_home on /mounts/for_backup/paul_home type nfs
 (ro,noexec,nosuid,nodev,addr=10.0.0.100)

 Here is the crux of the problem:
 [EMAIL PROTECTED]:/mounts/for_backup# ls -ld /mounts/for_backup/paul_home
 drwxrwx--x 93 1002 1002 8192 2008-02-15 08:31 /mounts/for_backup/paul_home



Re: [Bacula-users] Building 2.2.8 on CentOS 4.6

2008-01-29 Thread Paul Stewart
It built fine here on CentOS 4.6 as a client-only build (if that helps).

Paul


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Michael
Nelson
Sent: Tuesday, January 29, 2008 3:47 PM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Building 2.2.8 on CentOS 4.6

When I try to rpmbuild the source rpm for 2.2.8 on CentOS 4.6 it bombs 
out immediately with:

libstdc++-devel = 3.4.63.4.6-89 is needed by bacula-2.2.8-1.i386

I can't find any such version of libstdc++-devel in any of the CentOS 
repositories.  The version that is installed on this machine is:

libstdc++-devel-3.4.6-9

...which matches the gcc:

gcc-c++-3.4.6-9.i386

Has anyone else run into this?

Thanks, Michael










[Bacula-users] multiple disks within pool?

2008-01-26 Thread Paul Dekkers
Hi,

We're experimenting with Bacula as a possible replacement for our NetVault
setup. On the NetVault server, we have multiple partitions that we can use
for file-based tapes. (This allows us to easily add arrays or space when we
need more, something a striped setup, for instance, would not allow.)

I can't seem to find out how we can use multiple storage devices with
Bacula. The examples in the manual seem to assume that we want a different
storage device or pool for different clients, but basically I just want to
use the same pool for all clients and use whatever storage is available in
that pool (moving on to the next disk once the previous one is filled).

I defined multiple devices in my bacula-sd.conf, like so:

Device {
  Name = dev1
  Media Type = File1
  Archive Device = /data/dev1
  LabelMedia = yes
  Random Access = Yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

Device {
  Name = dev2
  Media Type = File2
  Archive Device = /data/dev2
  LabelMedia = yes
  Random Access = Yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

And configured them in bacula-dir.conf:

Storage {
  Name = dev1
  Address = bacula.wind.surfnet.nl
  SDPort = 9103
  Password = ...
  Device = dev1
  Media Type = File1
  Maximum Concurrent Jobs = 5
}

Storage {
  Name = dev2
  Address = bacula.wind.surfnet.nl
  SDPort = 9103
  Password = ...
  Device = dev2
  Media Type = File2
  Maximum Concurrent Jobs = 5
}

I tried to put these together in a pool, something that apparently
doesn't work ;-)

Pool {
  Name = Default
  Pool Type = Backup
  Label Format = Vol
  AutoPrune = yes
  Volume Retention = 30d
  #Recycle Oldest Volume = yes
  #Purge Oldest Volume = yes
  Storage = dev1,dev2
  Maximum Volume Bytes = 10GB
  #Maximum Volumes = 4
}

Still experimenting with toggles there. (Recycle/Purge Oldest Volume seems
interesting in combination with a limited volume size, to use whatever space
there is for backups: the more there is to back up, the less history we
keep. I might want a different Pool for Fulls vs. Incrementals, though.)
But it doesn't use dev2 once dev1 is filled, for instance (or once it has 4
volumes, though I guess that limit may count across all of the devices).

Any suggestions on how to use multiple disk-devices for the same pool?
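One approach worth testing (a sketch; grouping File devices under a virtual autochanger is a later-Bacula idiom, so confirm it against your version's manual — resource names below are assumptions) is to give both devices the SAME Media Type and group them under one Autochanger, so a single Storage resource and Pool can spill from one disk to the next:

```
# bacula-sd.conf sketch
Autochanger {
  Name = FileChgr
  Device = dev1, dev2
  Changer Device = /dev/null
  Changer Command = ""
}
Device {
  Name = dev1
  Media Type = File          # one shared Media Type across both devices
  Archive Device = /data/dev1
  Autochanger = yes
  ...
}
```

The Director would then point a single Storage resource at FileChgr with Media Type = File, rather than at dev1 and dev2 separately.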

Paul


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Remote backup question

2008-01-15 Thread Paul Stewart
Hi folks...

Just started using Bacula recently; I have it up and working on one machine
and like it so far ;)

I'm trying to get another machine backing up to my 'host' machine, but I'm
confused over what actually needs to be installed on the remote machine
(both host and remote are Linux). The docs talk about moving one binary and
a conf file over to the remote machine, and that it should then work... is
this correct?

Can someone provide a bit more detail on this?

Thanks very much,

Paul






Re: [Bacula-users] Remote backup question

2008-01-15 Thread Paul Stewart
Thanks... I moved the binary (moved the wrong binary last time - duh)... and
the conf file - updated the config and get this now:

[EMAIL PROTECTED] bacula]# bacula-fd
Floating point exception

This is on a Linux machine (host is Centos - this machine is RHEL4) - do I
need to compile on the RHEL4 machine to overcome this?  I don't mind, just
thinking if that's the solution or if something else might be wrong...

Take care,

Paul


-Original Message-
From: Dan Langille [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 15, 2008 4:19 PM
To: Paul Stewart
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Remote backup question

Paul Stewart wrote:
> Hi folks...
>
> Just started using Bacula recently - have it up and working on one machine -
> like it so far..;)
>
> I'm trying to get another machine backing up to my 'host' machine but
> confused over what needs to be actually installed on the remote machine
> (both host and remote are Linux).  The docs talk about moving one binary
> over to the remote machine and a conf file and it should work... is this
> correct?

Yes.

> Can someone provide a bit more detail on this?

I'm the FreeBSD maintainer for Bacula.  FreeBSD provides a package/port 
called bacula-client.  It has everything you need.  I expect the 
packaging system for your OS does similar.

Of course, you can just build Bacula on that host


-- 
Dan Langille

BSDCan - The Technical BSD Conference : http://www.bsdcan.org/
PGCon  - The PostgreSQL Conference: http://www.pgcon.org/





Re: [Bacula-users] Remote backup question

2008-01-15 Thread Paul Stewart
Thanks everyone for the QUICK responses..;)  I'll give that a try - makes
sense, just thought I'd ask

Paul


-Original Message-
From: Michael Galloway [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 15, 2008 4:36 PM
To: Paul Stewart
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Remote backup question

On Tue, Jan 15, 2008 at 04:08:21PM -0500, Paul Stewart wrote:
> Hi folks...
>
> Just started using Bacula recently - have it up and working on one machine -
> like it so far..;)
>
> I'm trying to get another machine backing up to my 'host' machine but
> confused over what needs to be actually installed on the remote machine
> (both host and remote are Linux).  The docs talk about moving one binary
> over to the remote machine and a conf file and it should work... is this
> correct?
>
> Can someone provide a bit more detail on this?
>
> Thanks very much,
>
> Paul


i generally build bacula from source, and try and keep my clients at the
same
release as the server. the client builds easily with the
--enable-client-only 
flag.

try it and see.

-- michael 






[Bacula-users] Media Types modified--how to notify bacula

2007-10-17 Thread Paul Waldo
Hi all,

After quite a bit of churning on my issue about Storage directives and
restores, I have found the answer (thanks Vik!)  I had all of the Media
Types for my many Storage directives set to "File".  When I tried to
restore, bacula went to the first Storage directive with the appropriate
type, which was typically the wrong one.

So, each of my storage directives now has a unique Media Type, such as
"Share42FileMediaType".  The problem now is that all of the previous
backups, while valid, are of the old Media Type ("File").  Now my daily
backups fail (or at least require intervention) with the message "Cannot
find any appendable volumes".  I assume this is because the old Media
Type has been forgotten, and recycling cannot take place.  Does this
seem reasonable?

If this is indeed the case, it seems to me what I need to do is modify
the database so that File media types are changed to the correct media
types.  Does this sound like a prudent thing to do?  Is there a way to
do this within bacula, or do I need to do some SQL magic?

Thanks in advance for any tips or pointers!

Paul




-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Using bconsole from command line

2007-10-17 Thread Paul Heinlein
On Wed, 17 Oct 2007, Pascal Charest wrote:

> I am currently using bconsole for all my operations and would like
> to know if there is any way I could script commands from outside the
> bacula console.

Below my .sig is the script I use to print hard copies of the jobs 
found on tapes that go into secure storage. My thinking is that if 
everything but the tapes gets destroyed, I'd at least have an inkling 
of which tapes to use first.

-- 
Paul Heinlein  [EMAIL PROTECTED]  http://www.madboa.com/


#!/bin/bash
#
# Script for printing list of jobs on any given Bacula volume.
#
# ==

VOL=$1
TYPE=
PFX=/usr/local/bacula

case $VOL in
   '')
 cat <<__eom__ >/dev/stderr

This script sends to the printer a list of jobs stored on any given
bacula tape volume. The printed output should be stored in the tape case
when a volume is moved to the safe or off-site.

Usage: $(basename $0) volume-name

Example: $(basename $0) 06L3

__eom__
 exit 1
 ;;
   *)
 # pools are named daily-tape, weekly-tape, and monthly-tape, so
 # the $TYPE of job is discerned by stripping '-tape' from the
 # end of the Pool.Name.
 SQL="
   select Pool.Name from Media, Pool
   where  Media.VolumeName = '$VOL'
   and    Media.PoolId = Pool.PoolId;"
 TYPE=$(echo "$SQL" | mysql bacula | tail -n1 | sed 's/-tape$//')
 ;;
esac

test -n "$TYPE" || exit 1

${PFX}/sbin/bconsole -c /etc/bacula/bconsole.conf \
  <<__eof__ | a2ps --columns=1 --chars-per-line=132
@#
@# *
@# Listing $TYPE jobs on volume $VOL
@# *
@#
@output /dev/null
@# the initial output and menu just clutter things up
query
14
@output
$VOL
quit
__eof__

###
### eof
###



Re: [Bacula-users] Media Types modified--how to notify bacula

2007-10-17 Thread Paul Waldo

Arno Lehmann wrote:

Hi,

17.10.2007 14:31, Paul Waldo wrote:

> Hi all,
>
> After quite a bit of churning on my issue about Storage directives and
> restores, I have found the answer (thanks Vik!)  I had all of the Media
> Types for my many Storage directives set to "File".  When I tried to
> restore, bacula went to the first Storage directive with the appropriate
> type, which was typically the wrong one.

Yes, that is a common result.

> So, each of my storage directives now has a unique Media Type, such as
> "Share42FileMediaType".  The problem now is that all of the previous
> backups, while valid, are of the old Media Type ("File").  Now my daily
> backups fail (or at least require intervention) with the message "Cannot
> find any appendable volumes".  I assume this is because the old Media
> Type has been forgotten, and recycling cannot take place.  Does this
> seem reasonable?

More or less... the old media type has not been forgotten, but there
is no device left to use these volumes.

> If this is indeed the case, it seems to me what I need to do is modify
> the database so that File media types are changed to the correct media
> types.  Does this sound like a prudent thing to do?

Definitely.

> Is there a way to do this within bacula, or do I need to do some SQL
> magic?

I'd do this with bconsole's sqlquery command. You'll need to know your
SQL commands, though...

Just create a full database dump before you work on it. Then, SQL
commands like 'update Media set MediaType=whatever where MediaId in
(1, 2, 3);' should work.

You'll need the list of MediaIds for each set of volumes, of course,
but these can be found easily.

Arno

Thanks for the reply, Arno.  I had a thought
My current Storage (with new Media Type) looks like this:
Storage {
   Name = ThecusSoftwareDevFile
   Address = pluto.waldo.home
   SDPort = 9103
   Password = xxx
   Device = ThecusSoftwareDevStorage
   Media Type = ThecusSoftwareDevFileMediaType
}

What if I add another Storage like this:
Storage {
   Name = ThecusSoftwareDevFileOld
   Address = pluto.waldo.home
   SDPort = 9103
   Password = xxx
   Device = ThecusSoftwareDevStorage
   Media Type = File
}

Would bacula then be able to find the previous types with the old Media 
Type of File?  My thought is that the new Storage directive could be 
kept around until all of the backups with Media Type File have been 
reclaimed.  Feel free to call me totally crazy, I just don't relish 
mucking with the DB if I don't have to...  Thanks!
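[Editor's note: for the database route Arno describes, the statements would look roughly like this. Table and column names (Media, Pool, MediaType, MediaId, PoolId) are Bacula's catalog schema; "Share42FileMediaType" is from this thread, while the Pool name 'Default' is only an assumed example -- restrict to whichever pool actually owns the volumes.]

```sql
-- Dump first, e.g.: mysqldump bacula > media-backup.sql
-- See which volumes still carry the old media type:
SELECT MediaId, VolumeName, MediaType FROM Media WHERE MediaType = 'File';

-- Retag them, scoped to one pool ('Default' is an assumed example):
UPDATE Media
   SET MediaType = 'Share42FileMediaType'
 WHERE MediaType = 'File'
   AND PoolId = (SELECT PoolId FROM Pool WHERE Name = 'Default');
```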


Paul


[Bacula-users] label on tape lost

2007-10-09 Thread Paul Muster
Hello everybody,


for some reason - maybe I did something that caused it, but I don't
remember - one of my tapes has no label any more. Hence during restore
bacula does not recognize the tape:

07-Okt 23:59 server: RestoreFiles.2007-10-07_23.57.27 Error: block.c:263
Volume data error at 0:0! Wanted ID: "BB02", got "". Buffer discarded.
07-Okt 23:59 server: RestoreFiles.2007-10-07_23.57.27 Warning:
acquire.c:146 Requested Volume "L20002" on "LTO-2-Laufwerk" (/dev/nst0) is
not a Bacula labeled Volume, because: ERR=block.c:263 Volume data error at
0:0! Wanted ID: "BB02", got "". Buffer discarded.

We discussed a similar problem in [1], but the last idea (a relabeling
option in btape) was never commented on.

How can I write a label to the tape without an EOF after the label? I know
the length of the label (see above), so following data would not be
destroyed, I think.

How can I make a dump of the tape to disk, so that I have more than one
try on this? I tried dd, but this needs a block device as source.


Thank you very much

Paul


[1]
http://sourceforge.net/mailarchive/message.php?msg_name=46DE0704.6050407%40umdnj.edu
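[Editor's note on the dump question: dd reads character devices such as /dev/nst0 fine; no block device is needed. The trick is that each read() on a tape stops at a filemark, so reading from a single open descriptor in a loop captures one tape file per pass. A sketch, with example paths only; rewind the tape first (mt -f /dev/nst0 rewind):]

```shell
#!/bin/sh
# Sketch: copy each file on a tape device (e.g. /dev/nst0) to numbered
# disk images, so you get more than one try at the data.
dump_tape() {
  DEV=$1 OUT=$2
  mkdir -p "$OUT"
  n=0
  exec 3< "$DEV"   # one descriptor, so the position advances across reads
  while :; do
    dd <&3 of="$OUT/file$n.img" bs=64k 2>/dev/null
    # each dd stops where read() returns 0: a filemark on tape;
    # an empty image therefore means we hit end-of-data
    if [ ! -s "$OUT/file$n.img" ]; then
      rm -f "$OUT/file$n.img"
      break
    fi
    n=$((n+1))
  done
  exec 3<&-
  echo "dumped $n tape file(s) to $OUT"
}

# e.g.: dump_tape /dev/nst0 /var/tmp/tapedump
```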




[Bacula-users] Reality check

2007-10-09 Thread Paul Heinlein
I recently migrated from an LTO-1 drive (in a PowerVault 122T) to an 
LTO-3 (PowerVault 124T).

Bacula (2.0.3) is locally configured to spool up to 50 GB to disk 
before writing to tape. Bacula reports that transfer rates have risen 
from about 22 Mbytes/second on the LTO-1 to about 43 Mbytes/second on 
the LTO-3.

Does anyone have logs lying around that might confirm or deny whether 
those numbers are in the range of reasonable?

-- 
Paul Heinlein  [EMAIL PROTECTED]  http://www.madboa.com/



Re: [Bacula-users] Reality check

2007-10-09 Thread Paul Heinlein
On Tue, 9 Oct 2007, John Drescher wrote:

> On 10/9/07, Paul Heinlein [EMAIL PROTECTED] wrote:
>> I recently migrated from an LTO-1 drive (in a PowerVault 122T) to
>> an LTO-3 (PowerVault 124T).
>>
>> Bacula (2.0.3) is locally configured to spool up to 50 GB to disk
>> before writing to tape. Bacula reports that transfer rates have
>> risen from about 22 Mbytes/second on the LTO-1 to about 43
>> Mbytes/second on the LTO-3.
>>
>> Does anyone have logs lying around that might confirm or deny
>> whether those numbers are in the range of reasonable?
>
> Although these numbers are highly dependent on a lot of factors
> (compression, filesystem performance, database, network speed), which
> makes comparing them imperfect, your numbers look pretty good to me.

I understand the caveats about equating one network's numbers with 
another's -- but thanks for letting me know mine are at least within 
a standard deviation of reasonable. :-)
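[Editor's note: as a rough sanity check (my arithmetic, not from the thread): LTO-1's nominal native rate is about 15 MB/s and LTO-3's about 80 MB/s, so 22 MB/s on LTO-1 implies some compression, and a sustained 43 MB/s on LTO-3 is well within spec. The despool time for the 50 GB spool works out as:]

```shell
# despool time for a 50 GB spool at the two observed rates (MB/s)
spool_gb=50
for rate in 22 43; do
  awk -v gb="$spool_gb" -v r="$rate" \
    'BEGIN { printf "%d MB/s: %.1f min\n", r, gb * 1024 / r / 60 }'
done
# -> 22 MB/s: 38.8 min
# -> 43 MB/s: 19.8 min
```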

-- 
Paul Heinlein  [EMAIL PROTECTED]  http://www.madboa.com/



[Bacula-users] Storage Resource for Restore Job cannot be specified?

2007-10-08 Thread Paul Waldo
Hi all,

I am trying to create a restore job definition:
Job {
  Name = ThecusCameraRestoreFiles
  #JobDefs = RestoreFilesTemplate
  Storage = ThecusCameraFile
  Type = Restore
  Client = pluto
  FileSet = "Full Set"
  Pool = Default
  Messages = Standard
  Where = /tmp/bacula-restores
}

When I run this job to restore files, the first thing I notice is that
the Job Summary (presented before the run) shows that it will use Storage
"File".  Why would it not use "ThecusCameraFile", as specified in the
job definition?

Needing the files restored, I modify the job run so that the Storage is
correct, like this:
Run Restore job
JobName:ThecusCameraRestoreFiles
Bootstrap:  /var/lib/bacula/pluto-dir.7.restore.bsr
Where:  /tmp/bacula-restores
Replace:always
FileSet:Full Set
Client: thecus_camera
Storage:ThecusCameraFile
When:   2007-10-08 07:17:10
Catalog:MyCatalog
Priority:   10
OK to run? (yes/mod/no): y
Job started. JobId=692

I quickly get a message informing me that the Storage is a problem:
You have messages.
*mess
08-Oct 07:18 pluto-dir: Start Restore Job
ThecusCameraRestoreFiles.2007-10-08_07.18.57
08-Oct 07:18 pluto-sd: Failed command:
08-Oct 07:18 pluto-sd: ThecusCameraRestoreFiles.2007-10-08_07.18.57
Fatal error:
 Device "FileStorage" with MediaType "File" requested by DIR not
found in SD Device resources.
08-Oct 07:18 pluto-dir: ThecusCameraRestoreFiles.2007-10-08_07.18.57
Fatal error:
 Storage daemon didn't accept Device "FileStorage" because:
 3924 Device "FileStorage" not in SD Device resources.
08-Oct 07:18 pluto-dir: ThecusCameraRestoreFiles.2007-10-08_07.18.57
Error: Bacula 1.38.11 (28Jun06): 08-Oct-2007 07:18:59
  JobId:  692
  Job:ThecusCameraRestoreFiles.2007-10-08_07.18.57
  Client: thecus_camera
  Start time: 08-Oct-2007 07:18:59
  End time:   08-Oct-2007 07:18:59
  Files Expected: 53
  Files Restored: 0
  Bytes Restored: 0
  Rate:   0.0 KB/s
  FD Errors:  0
  FD termination status:
  SD termination status:
  Termination:*** Restore Error ***

08-Oct 07:18 pluto-dir: ThecusCameraRestoreFiles.2007-10-08_07.18.57
Error: Bacula 1.38.11 (28Jun06): 08-Oct-2007 07:18:59
  JobId:  692
  Job:ThecusCameraRestoreFiles.2007-10-08_07.18.57
  Client: thecus_camera
  Start time: 08-Oct-2007 07:18:59
  End time:   08-Oct-2007 07:18:59
  Files Expected: 53
  Files Restored: 0
  Bytes Restored: 0
  Rate:   0.0 KB/s
  FD Errors:  1
  FD termination status:
  SD termination status:
  Termination:*** Restore Error ***


Any help would be greatly appreciated.  Thanks in advance!

Paul
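[Editor's note: the error says the Director asked this SD for a Device named "FileStorage" with Media Type "File", and no such Device resource exists. One way to make that request resolvable is to keep an SD Device for every Media Type still recorded in the catalog. A sketch only; the names echo the error message and the Archive Device path is an assumed example:]

```
# bacula-sd.conf -- sketch: a Device matching the "FileStorage"/"File"
# pair the Director requested; the path below is an assumed example.
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /var/bacula/file-storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```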



