Re: [Bacula-users] Some unanswered questions

2013-08-20 Thread Radosław Korzeniewski
Hello,

2013/8/20 debuggercz bacula-fo...@backupcentral.com

 Hello,

 I decided to deploy Bacula on the servers at the school. I have some questions,
 if you have a minute.

 1.) I heard that Bacula has a problem with scalability. It can use any DB,
 but it cannot use multiple instances. Do you know the solution to this?


How large will your deployment be? More than 2000 clients and more than
100M files? If not, forget about it. You can handle it easily with a single
Director and a single Catalog database.

2.) How do we effectively link a Verify job to a normal backup? When the backup is
 performed, we want to verify that the backup is all right...


What do you mean?
The only way to verify any kind of write is a read. If you want to
verify that your backup is correct, you should perform a restore.
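
For what it's worth, Bacula also has a built-in Verify job type that can re-read
a volume after a backup and compare it against the Catalog. A minimal sketch,
assuming placeholder resource names (an actual test restore remains the
strongest check, as noted above):

Job {
  Name = "VerifyNightly"            # placeholder name
  Type = Verify
  Level = VolumeToCatalog           # re-read the written volume and compare it to the Catalog
  Verify Job = "NightlyBackup"      # the backup job whose results are checked
  Client = school-fd                # placeholder client
  FileSet = "Full Set"
  Storage = File
  Pool = Default
  Messages = Standard
}

Scheduling such a job right after the backup gives a catalog-level consistency
check without a full restore.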


 3.) I read on the internet about load-balancing (multiple Director
 daemons). Is it possible to automatically associate clients with a Director
 daemon? I only know about manually associating clients into groups, where one
 group is managed by one Director daemon.


How many backup clients do you plan to deploy?

I bet you will hit a server network throughput bottleneck long before the
Director becomes one. A single Bacula Client with a single Job
can saturate 1 Gbit/s Ethernet without a problem (if you know what hardware
you require and how to set it up correctly). Why do you think a Bacula
Director cannot handle 1 client with one job running? :) In that case you
should separate the Storage Daemon server from the Director, then install the
next Storage Daemon, then another SD, and another...
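
To illustrate that scale-out path: an additional Storage Daemon appears in the
Director configuration as just another Storage resource. A rough sketch, with
placeholder host name, password, device and media type:

Storage {
  Name = File-SD2
  Address = sd2.example.org         # second Storage Daemon host (placeholder)
  SDPort = 9103
  Password = "sd2-secret"           # must match the Director resource in that SD's bacula-sd.conf
  Device = FileStorage              # device defined on the second SD
  Media Type = File2                # keep Media Types distinct between SDs
}

Jobs and Pools can then be pointed at File-SD2 to spread the write load across
servers.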

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net


[Bacula-users] Pruning Not Working

2013-08-20 Thread Rickcinfl
Hi. I have everything set to 30 days and I still can't get it to purge. Here is
what it says after a backup:

20-Aug 07:39 bacula-dir JobId 726: Begin pruning Jobs older than 37 years 8 
months 2 days 11 hours 39 mins 24 secs. 
20-Aug 07:39 bacula-dir JobId 726: No Jobs found to prune. 
20-Aug 07:39 bacula-dir JobId 726: Begin pruning Jobs. 
20-Aug 07:39 bacula-dir JobId 726: No Files found to prune. 
20-Aug 07:39 bacula-dir JobId 726: End auto prune. 

What am I missing? My Hard Drive is filling up fast. 

Thanks, 
Rick






Re: [Bacula-users] Pruning Not Working

2013-08-20 Thread John Drescher
 20-Aug 07:39 bacula-dir JobId 726: Begin pruning Jobs older than 37 years
 8 months 2 days 11 hours 39 mins 24 secs.
 20-Aug 07:39 bacula-dir JobId 726: No Jobs found to prune.
 20-Aug 07:39 bacula-dir JobId 726: Begin pruning Jobs.
 20-Aug 07:39 bacula-dir JobId 726: No Files found to prune.
 20-Aug 07:39 bacula-dir JobId 726: End auto prune.

 What am I missing? My Hard Drive is filling up fast.



Did you set limits on your pool? I mean things like use duration, maximum
volume size, maximum number of volumes? If not, Bacula will fill the disk and
only then does your retention period begin.
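
To illustrate the kind of limits John mentions, here is a hedged Pool sketch
(the name and values are examples only; the relevant directives are, I believe,
Volume Use Duration, Maximum Volume Bytes and Maximum Volumes, together with
AutoPrune and Recycle):

Pool {
  Name = File-Pool                  # example name
  Pool Type = Backup
  Recycle = yes                     # allow purged volumes to be reused
  AutoPrune = yes                   # prune expired jobs when a new volume is needed
  Volume Retention = 30 days
  Volume Use Duration = 24 hours    # stop appending to a volume after a day
  Maximum Volume Bytes = 50G        # cap each volume's size
  Maximum Volumes = 60              # cap the total number of volumes on disk
}

Without the use-duration / size / count limits, a single volume keeps growing
and its retention clock effectively never starts.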

John


Re: [Bacula-users] LIP reset occurred (0), device removed.

2013-08-20 Thread Iban Cabrillo
Hi,
  this is the qlogic driver version:

  [1.348682] qla2xxx [:07:00.0]-00fa:1: QLogic Fibre Channed HBA
Driver: 8.03.07.12-k.
[1.348686] qla2xxx [:07:00.0]-00fb:1: QLogic QLE2562 - QLogic 8Gb
FC Dual-port HBA for System x.
[1.348699] qla2xxx [:07:00.0]-00fc:1: ISP2532: PCIe (2.5GT/s x8) @
:07:00.0 hdma+ host#=1 fw=5.06.05 (90d5).
[1.348768] qla2xxx [:07:00.1]-001d: : Found an ISP2532 irq 19
iobase 0xc9c68000.
[1.348949] qla2xxx :07:00.1: irq 44 for MSI/MSI-X
[1.348956] qla2xxx :07:00.1: irq 45 for MSI/MSI-X
[1.349023] qla2xxx [:07:00.1]-0040:2: Configuring PCI space...
[1.349028] qla2xxx :07:00.1: setting latency timer to 64
[1.361424] qla2xxx [:07:00.1]-0061:2: Configure NVRAM parameters...
[1.369088] qla2xxx [:07:00.1]-0078:2: Verifying loaded RISC code...
[1.369131] qla2xxx [:07:00.1]-0092:2: Loading via request-firmware.
[1.400994] qla2xxx [:07:00.1]-00c0:2: Allocate (64 KB) for FCE...
[1.401064] qla2xxx [:07:00.1]-00c3:2: Allocated (64 KB) EFT ...
[1.401150] qla2xxx [:07:00.1]-00c5:2: Allocated (1350 KB) for
firmware dump.
[1.405503] scsi2 : qla2xxx
[1.405763] qla2xxx [:07:00.1]-00fa:2: QLogic Fibre Channed HBA
Driver: 8.03.07.12-k.
[1.405767] qla2xxx [:07:00.1]-00fb:2: QLogic QLE2562 - QLogic 8Gb
FC Dual-port HBA for System x.
[1.405779] qla2xxx [:07:00.1]-00fc:2: ISP2532: PCIe (2.5GT/s x8) @
:07:00.1 hdma+ host#=2 fw=5.06.05 (90d5).
..
[2.928979] scsi 1:0:0:0: Sequential-Access IBM  ULT3580-TD3
93GP PQ: 0 ANSI: 3
[2.935530] st: Version 20101219, fixed bufsize 32768, s/g segs 256
[2.935861] st 1:0:0:0: Attached scsi tape st0
[2.935864] st 1:0:0:0: st0: try direct i/o: yes (alignment 4 B)
[3.939166] qla2xxx [:07:00.1]-500a:2: LOOP UP detected (8 Gbps).
[3.997944] scsi 2:0:0:0: Sequential-Access IBM  ULT3580-TD5
C7RC PQ: 0 ANSI: 6
[4.000531] scsi 2:0:0:1: Medium ChangerIBM  03584L32
B570 PQ: 0 ANSI: 6
[4.012583] st 2:0:0:0: Attached scsi tape st1
[4.012586] st 2:0:0:0: st1: try direct i/o: yes (alignment 4 B)
[4.013947] osst :I: Tape driver with OnStream support version 0.99.4
[4.013948] osst :I: $Id: osst.c,v 1.73 2005/01/01 21:13:34 wriede Exp $
[4.014691] SCSI Media Changer driver v0.25
[4.016286] ch0: type #1 (mt): 0x1+2 [medium transport]
[4.016289] ch0: type #2 (st): 0x401+1601 [storage]
[4.016291] ch0: type #3 (ie): 0x301+10 [import/export]
[4.016293] ch0: type #4 (dt): 0x101+2 [data transfer]


[1:0:0:0]tapeIBM  ULT3580-TD3  93GP  /dev/st0
[2:0:0:0]tapeIBM  ULT3580-TD5  C7RC  /dev/st1
[2:0:0:1]mediumx IBM  03584L32 B570  /dev/sch0

Sometimes the reset happens when only a few MBs have been written, other times
after a couple of hundred MBs, but only when we use the LTO5 ([2:0:0:0] tape
IBM ULT3580-TD5 C7RC /dev/st1); the LTO3 always works great.

The Linux kernel running is:
Linux bacula 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux

and the bacula-sd configuration for the LTO5 tape drive:
Device {
  # The TSM3100's second tape drive
  Name = ULT3580-TD5
  Archive Device = /dev/nst1
  Device Type = Tape
  Media Type = LTO-5
  Autochanger = Yes
  # Changer Device = inherited from Changer
  Alert Command = "sh -c '/usr/sbin/tapeinfo -f /dev/sg1 | /bin/sed -n /TapeAlert/p'"
  Drive Index = 0
  RemovableMedia = yes
  Random Access = no
  Maximum Block Size = 262144
  Maximum Network Buffer Size = 262144
  Maximum Spool Size = 20gb
  Maximum Job Spool Size = 10gb
  Spool Directory = /backup/spool
  AutomaticMount = Yes;
}

root@bacula:~# /usr/sbin/tapeinfo -f /dev/sg1
Product Type: Tape Drive
Vendor ID: 'IBM '
Product ID: 'ULT3580-TD5 '
Revision: 'C7RC'
Attached Changer API: No
SerialNumber: '00078ABB4C'
MinBlock: 1
MaxBlock: 8388608
SCSI ID: 0
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: 0x58
Density Code: 0x58
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
Block Position: 582
Partition 0 Remaining Kbytes: -1
Partition 0 Size in Kbytes: -1
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 0
MaxPartitions: 1

I appreciate any help. Thanks in advance, I.


2013/8/19 Radosław Korzeniewski rados...@korzeniewski.net

 Hello,

 2013/8/19 Iban Cabrillo cabri...@ifca.unican.es

 Hi, any idea?
   It seems to be something related to Bacula; I have tried changing the FC port,
 but the result has been the same.


 FC LIP reset is not related to Bacula - IMHO. I had the same problem, but
 with a changer device. The problem was a second server which accidentally
 accessed the same target (at random times). When I switched off the second
 server the problem was cured. Sure, your problem could have a different
 origin than mine, and YMMV. BTW, what kernel tape driver do you use (Linux
 stock)?

 best regards
 --
 Radosław Korzeniewski
 rados...@korzeniewski.net





Re: [Bacula-users] How to prevent large files from being backed up?

2013-08-20 Thread Dimitri Maziuk
On 08/19/2013 12:41 PM, Jonathan Bayer wrote:
 Hi,
 
 We have a few users who, for various reasons, constantly create & delete
 huge files (hundreds of gigs).  I'd like to exclude these from the 
 backup process.
 
 How can I do that, since I don't know where they can appear or what 
 their names are?

You can't.

I've a similarly unknown set of files to back up and the best I could
come up with is to run find on the client looking for files not modified in
the last 24 hours. And even then bacula isn't saving the metadata
(ownership, permissions, timestamps) on parent directories (apparently
it's a feature).

So if you know these files don't stick around for more than X hours, you
can run a script on the client that gives you a list of files older than
X hours & you back up those. Obviously, that gives you an X-hour window
when real files aren't backed up plus a change for false negatives &
false positives. Plus bacula's dynamic fileset feature.

We also have a rule: anything with _unb_ in its name isn't backed up
-- but I can educate my users...

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu





Re: [Bacula-users] How to prevent large files from being backed up?

2013-08-20 Thread Dimitri Maziuk
On 08/20/2013 02:02 PM, Dimitri Maziuk wrote:
...
 when real files aren't backed up plus a change for false negatives &

*chance*
duh
-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu





Re: [Bacula-users] LIP reset occurred (0), device removed.

2013-08-20 Thread Clark, Patricia A.
Possibly the fiber cable?  Can you swap it out and try again?
I had 20 fiber connections and one of them needed replacing.  It made the drive 
look bad from the server, but the library did not report any error conditions 
with the drive.

Patti Clark
Linux System Administrator
Research and Development Systems Support Oak Ridge National Laboratory

From: Iban Cabrillo cabri...@ifca.unican.es
Date: Tuesday, August 20, 2013 12:45 PM
To: Radosław Korzeniewski rados...@korzeniewski.net
Cc: Bacula Users Bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] LIP reset occurred (0), device removed.

Re: [Bacula-users] How to prevent large files from being backed up?

2013-08-20 Thread Phil Stracchino

On 08/20/13 15:02, Dimitri Maziuk wrote:
 On 08/19/2013 12:41 PM, Jonathan Bayer wrote:
 Hi,
 
 We have a few users who, for various reasons, constantly create 
 delete huge files (hundreds of gigs).  I'd like to exclude these
 from the backup process.
 
 How can I do that, since I don't know where they can appear or
 what their names are?
 
 You can't.

Actually, yes, you can.  Instead of using a static FileSet, you can
configure Bacula to source a script that generates it on the fly.  You
could create a dynamic-fileset script that excludes all files over a
specified size.

Here's an example from
http://www.bacula.org/manuals/en/install/install/Configuring_Director.html
of a dynamically-generated Fileset:

Include {
   Options {
  signature = SHA1
   }
   File = "|sh -c 'df -l | grep \"^/dev/hd[ab]\" | grep -v \".*/tmp\" \
  | awk \"{print \\$6}\"'"
}

So you could simply create a dynamic fileset script here that
enumerates all directories and all files not exceeding a specified size.

Now, the above is a bit of a brute-force solution.  I have not
personally tried this refinement, but I see no reason it should not
ALSO be possible to create a static fileset with a dynamically
generated exclude list, something like this.

FileSet {
  Name = "Dynamic Exclude Set"
  Include {
     Options {
        signature = SHA1
        File    = "|sh -c 'find /home -size +10G'"
        Exclude = yes
     }
     File = /
     File = /home
     File = /var
  }
}

This example should result in automatically excluding any file 10GB or
larger located anywhere under /home.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] How to prevent large files from being backed up?

2013-08-20 Thread Dimitri Maziuk
On 08/20/2013 03:03 PM, Phil Stracchino wrote:
 On 08/20/13 15:02, Dimitri Maziuk wrote:
 On 08/19/2013 12:41 PM, Jonathan Bayer wrote:
 Hi,

 We have a few users who, for various reasons, constantly create &
 delete huge files (hundreds of gigs).  I'd like to exclude these
 from the backup process.

 How can I do that, since I don't know where they can appear or
 what their names are?
 
 You can't.
 
 Actually, yes, you can.

Did you read the rest of my e-mail?

 Instead of using a static Fileset, you can
 configure Bacula to source a script that generates it on the fly. 

That is exactly what I said. Also that it's error-prone and that bacula
won't store attributes on the parent directories unless you explicitly
include them. (But then you have to exclude everything in them and
include individual files, which makes the whole mess even uglier and
more error-prone.)

See also the replies to my "Missing directory metadata and weird directory
timestamps" thread in this month's archive.

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu





Re: [Bacula-users] How to prevent large files from being backed up?

2013-08-20 Thread Phil Stracchino

On 08/20/13 16:43, Dimitri Maziuk wrote:
 On 08/20/2013 03:03 PM, Phil Stracchino wrote:
 On 08/20/13 15:02, Dimitri Maziuk wrote:
 On 08/19/2013 12:41 PM, Jonathan Bayer wrote:
 Hi,
 
 We have a few users who, for various reasons, constantly
 create & delete huge files (hundreds of gigs).  I'd like to
 exclude these from the backup process.
 
 How can I do that, since I don't know where they can appear
 or what their names are?
 
 You can't.
 
 Actually, yes, you can.
 
 Did you read the rest of my e-mail?

Yes, I did.  I'm not quite clear here whether you're arguing that I'm
contradicting you, or arguing that you contradicted your own statement
that "You can't."

 Instead of using a static Fileset, you can configure Bacula to
 source a script that generates it on the fly.
 
 That is exactly what I said. Also that it's error-prone and that
 bacula won't store attributes on the parent directories unless you
 explicitly include them. (But then you have to exclude
 everything in them and include individual files, which makes the
 whole mess even uglier and more error-prone.)

Dmitri, if you look at the approach I proposed, it would back up the
entire tree and only exclude specific files, which entirely sidesteps
the parent-directory-metadata problem you mentioned.  Excluding only
the files you don't want is a simpler solution altogether.  Are you
arguing that it won't work?  Your point is unclear.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] How to prevent large files from being backed up?

2013-08-20 Thread Dimitri Maziuk
On 08/20/2013 04:49 PM, Phil Stracchino wrote:

 Dmitri, if you look at the approach I proposed, it would back up the
 entire tree and only exclude specific files, which entirely sidesteps
 the parent-directory-metadata problem you mentioned.  Excluding only
 the files you don't want is a simpler solution altogether.  Are you
 arguing that it won't work?  Your point is unclear.

My points are

1. Excluding files based on size alone sounds icky. You need to consider
your false positives & negatives carefully.

2. OP didn't say what his fileset normally looks like. Without that:
- if he's backing up everything and excluding specific files, then
another exclude should probably work;
- if he's excluding everything and backing up specific files, then a
dynamic fileset comes with the parent metadata problem.

3. Most importantly, the only way to do it right is if the OP can get his
users to cooperate. Then something like 'exclude { wild = "*__nobak__*" }' is
all there is to it (see the sketch below).
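
A hedged sketch of what that naming convention could look like in a FileSet
(the path, pattern and names are just examples):

FileSet {
  Name = "Home-NoBak"                # example name
  Include {
    Options {
      wildfile = "*__nobak__*"       # anything the users tag themselves...
      exclude = yes                  # ...is skipped
    }
    Options {
      signature = SHA1               # defaults for everything else
    }
    File = /home
  }
}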

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu





Re: [Bacula-users] How to prevent large files from being backed up?

2013-08-20 Thread Phil Stracchino

On 08/20/13 18:17, Dimitri Maziuk wrote:
 On 08/20/2013 04:49 PM, Phil Stracchino wrote:
 
 Dmitri, if you look at the approach I proposed, it would back up
 the entire tree and only exclude specific files, which entirely
 sidesteps the parent-directory-metadata problem you mentioned.
 Excluding only the files you don't want is a simpler solution
 altogether.  Are you arguing that it won't work?  Your point is
 unclear.
 
 My points are
 
 1. Excluding files based on size alone sounds icky. You need to
 consider your false positives & negatives carefully.

That *was* the original poster's stipulated condition, though.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.
