Re: [Bacula-users] Performance options for single large (100TB) server backup?

2011-06-28 Thread Christian Manal
  - File daemon is single threaded so it is limiting backup performance. Is there 
 a way to start more than one stream at the same time for a single machine 
 backup? Right now I have all the file systems for a single client in the same 
 file set.
 
  - Tied in with the above, accurate backups cut into performance even more when 
 doing all the md5/sha1 calcs. Splitting this, perhaps along with the above, across 
 multiple threads would really help.
 
  - How to stream a single job to multiple tape drives? Couldn't figure this 
 out, so only one tape drive is being used.
 
  - Spooling to disk first and then to tape is a killer. If multiple streams could 
 happen at once this might mitigate it, or some type of continuous spooling. How 
 do others do this?


Hi,

I haven't tried it, but shouldn't it be possible to run multiple FD
instances on different ports? You could split the fileset into multiple
jobs, which could then run concurrently against the different FDs.
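
An untested sketch of what I mean (names, port, paths and password are all
made up). The second FD instance gets its own config and is started with
bacula-fd -c /etc/bacula/bacula-fd-2.conf:

# /etc/bacula/bacula-fd-2.conf -- second FD on the same host
FileDaemon {
  Name = bigserver2-fd
  FDport = 9112                             # the default instance keeps 9102
  WorkingDirectory = /var/bacula/working2   # separate dir so state files don't collide
  Pid Directory = /var/run
}
# (plus the usual Director {} resource with its own password)

# bacula-dir.conf -- second Client resource for the same host, other port
Client {
  Name = bigserver2-fd
  Address = bigserver.example.com
  FDPort = 9112
  Catalog = MyCatalog
  Password = "xxx"                          # must match bacula-fd-2.conf
}

Each job pointing at bigserver2-fd would then carry part of the original
fileset.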


Regards,
Christian Manal



[Bacula-users] Bacula FD not talking to SD

2011-06-28 Thread John Malone
Hi,
 
output of client status at the same time as the log below:
bristol-1622a6f-fd Version: 5.0.3 (04 August 2010)  VSS Linux Cross-compile 
Win32
Daemon started 28-Jun-11 08:21. Jobs: run=0 running=0.
 Heap: heap=0 smbytes=17,535 max_bytes=17,763 bufs=77 max_bufs=81
 Sizeof: boffset_t=8 size_t=4 debug=0 trace=1
 
Running Jobs:
JobId 19 Job bristol-1622a6f-fd.2011-06-28_10.09.35_06 is running.
Full Backup Job started: 28-Jun-11 10:08
Files=0 Bytes=0 Bytes/sec=0 Errors=0
Files Examined=0
SDReadSeqNo=5 fd=516
 
End of log file showing failure at the same time as the status page above:
 
24-Jun 11:37 bacula01-dir JobId 37: No prior Full backup Job record found.
24-Jun 11:37 bacula01-dir JobId 37: No prior or suitable Full backup found in 
catalog. Doing FULL backup.
24-Jun 11:37 bacula01-dir JobId 37: Start Backup JobId 37, 
Job=bristol-1622a6f-fd.2011-06-24_11.37.50_12
24-Jun 11:37 bacula01-dir JobId 37: Using Device FileStorage
24-Jun 11:45 bacula01-dir JobId 37: Fatal error: Socket error on Storage 
command: ERR=Interrupted system call
24-Jun 11:45 bacula01-dir JobId 37: Fatal error: Network error with FD during 
Backup: ERR=Interrupted system call
24-Jun 11:45 bacula01-dir JobId 37: Fatal error: No Job status returned from FD.
24-Jun 11:45 bacula01-dir JobId 37: Bacula bacula01-dir 5.0.2 (28Apr10): 
24-Jun-2011 11:45:53
  Build OS:   i486-pc-linux-gnu debian squeeze/sid
  JobId:  37
  Job:bristol-1622a6f-fd.2011-06-24_11.37.50_12
  Backup Level:   Full (upgraded from Incremental)
  Client: bristol-1622a6f-fd 5.0.3 (04Aug10) 
Linux,Cross-compile,Win32
  FileSet:WinTest 2011-06-23 12:46:28
  Pool:   File (From Job resource)
  Catalog:MyCatalog (From Client resource)
  Storage:File (From Job resource)
  Scheduled time: 24-Jun-2011 11:37:47
  Start time: 24-Jun-2011 11:37:52
  End time:   24-Jun-2011 11:45:53
  Elapsed time:   8 mins 1 sec
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): 
  Volume Session Id:  3
  Volume Session Time:1308908426
  Last Volume Bytes:  49,455,379 (49.45 MB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Error
  SD termination status:  Waiting on FD
  Termination:Backup Canceled
All the daemons seem to be running ok, it's just that the FD won't talk to the 
SD (I think). Any suggestions?
 
I also get "Device is BLOCKED waiting to create a volume". Are the two related, 
or are they completely separate problems?
 
Thanks
 
John



Re: [Bacula-users] Firewall traversal

2011-06-28 Thread Martin Simmons
 On Mon, 27 Jun 2011 20:18:46 -0400, Dan Langille said:
 
 One of your basic assumptions is incorrect.  I don't know what it is, but
 something, somewhere is wrong.
 
 Verify that your  bacula-dir.conf configuration is correct.

I'd add:

Verify that your bacula-dir.conf configuration is actually being used.
E.g. change the storage address to something that doesn't resolve, reload the
config and check that it tries to use that address.
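
For example (resource names illustrative, hostname deliberately bogus):

Storage {
  Name = File
  Address = does-not-resolve.invalid   # temporarily wrong, on purpose
  SDPort = 9103
  Device = FileStorage
  Media Type = File
  Password = "xxx"
}

After a reload in bconsole, a test job should fail complaining about that
address; if it doesn't, the Director is reading a different config file
than the one you edited.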

__Martin



Re: [Bacula-users] Bextract not returning data

2011-06-28 Thread Martin Simmons
 On Mon, 27 Jun 2011 14:07:35 -0500, Troy Kocher said:
 
 All, 
 
 I could really use some ideas..
 
 #bextract -d99 -i databaselist.rest -V DatabaseF-0027 /data/bacula /data/tmp
 bextract: stored_conf.c:698-0 Inserting director res: foobar-dir
 bextract: butil.c:282 Using device: /data/bacula for reading.
 bextract: acquire.c:109-0 MediaType dcr= dev=File
 bextract: acquire.c:228-0 opened dev pool (/data/bacula) OK
 bextract: acquire.c:231-0 calling read-vol-label
 
 Volume Label:
 Id: Bacula 1.0 immortal
 VerNo : 11
 VolName   : DatabaseF-0027
 PrevVolName   : 
 VolFile   : 0
 LabelType : VOL_LABEL
 LabelSize : 186
 PoolName  : DatabaseF
 MediaType : File
 PoolType  : Backup
 HostName  : foobar.mtadistributors.com
 Date label written: 06-Sep-2010 02:23
 bextract: acquire.c:235-0 Got correct volume.
 24-Jun 11:16 bextract JobId 0: Ready to read from volume DatabaseF-0027 on 
 device pool (/data/bacula).
 bextract: attr.c:281-0 -rw-rw   1 pgsqlpgsql  386091824 
 2011-06-09 08:51:47  
 /data/tmp/mnt/database/usr/home/pgsql/dumps/mta.dump.sql.gz-4
 bextract JobId 0: -rw-rw   1 pgsqlpgsql  386091824 2011-06-09 
 08:51:47  /data/tmp/mnt/database/usr/home/pgsql/dumps/mta.dump.sql.gz-4
 --
 I started this on Friday and this morning a zero-byte file existed @ 
 '/data/tmp/mnt/database/usr/home/pgsql/dumps/mta.dump.sql.gz-4'.  Top showed 
 one of the processors working on 'bextract' and consuming 100%.
 
 Any help would be greatly appreciated!

Which version of Bacula?

Can you try attaching gdb to the bextract process and doing:

thread apply all bt

__Martin



[Bacula-users] Find out the error

2011-06-28 Thread Valerio Pachera
Hi all, the Bacula log shows there has been an error (Non-fatal FD errors).
I have no clue what happened.
Where can I find more information about it?

27-feb 01:52 fox2003-fd JobId 282: VSS Writer (BackupComplete):
System Writer, State: 0x1 (VSS_WS_STABLE)
27-feb 01:53 mainbkp-sd JobId 282: Job write elapsed time = 01:46:57,
Transfer rate = 3.109 M Bytes/second
27-feb 01:52 fox2003-fd JobId 282: VSS Writer (BackupComplete):
MSDEWriter, State: 0x1 (VSS_WS_STABLE)
27-feb 01:52 fox2003-fd JobId 282: VSS Writer (BackupComplete):
Registry Writer, State: 0x1 (VSS_WS_STABLE)
27-feb 01:52 fox2003-fd JobId 282: VSS Writer (BackupComplete): Event
Log Writer, State: 0x1 (VSS_WS_STABLE)
27-feb 01:52 fox2003-fd JobId 282: VSS Writer (BackupComplete): COM+
REGDB Writer, State: 0x1 (VSS_WS_STABLE)
27-feb 01:52 fox2003-fd JobId 282: VSS Writer (BackupComplete): WMI
Writer, State: 0x1 (VSS_WS_STABLE)
27-feb 01:53 control-station-director JobId 282: Bacula
control-station-director 5.0.2 (28Apr10): 27-feb-2011 01:53:34
  Build OS:   x86_64-pc-linux-gnu debian 5.0.5
  JobId:  282
  Job:bkp-FOX-2003.2011-02-27_00.05.00_12
  Backup Level:   Full
  Client: fox-fd 5.0.3 (04Aug10) Linux,Cross-compile,Win32
  FileSet:fox-2003-fileset 2011-02-16 00:05:00
  Pool:   main-pool (From Job resource)
  Catalog:FloverCatalog (From Client resource)
  Storage:mainbkp-sd (From Job resource)
  Scheduled time: 27-feb-2011 00:05:00
  Start time: 27-feb-2011 00:06:17
  End time:   27-feb-2011 01:53:34
  Elapsed time:   1 hour 47 mins 17 secs
  Priority:   10
  FD Files Written:   270,268
  SD Files Written:   270,268
  FD Bytes Written:   19,906,625,347 (19.90 GB)
  SD Bytes Written:   19,952,637,653 (19.95 GB)
  Rate:   3092.5 KB/s
  Software Compression:   63.7 %
  VSS:yes
  Encryption: no
  Accurate:   no
  Volume name(s): main-0129
  Volume Session Id:  27
  Volume Session Time:1298278661
  Last Volume Bytes:  19,982,722,170 (19.98 GB)
  Non-fatal FD errors:1
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK -- with warnings


27-feb 01:53 control-station-director JobId 282: Begin pruning Jobs
older than 21 days .
27-feb 01:53 control-station-director JobId 282: No Jobs found to prune.
27-feb 01:53 control-station-director JobId 282: Begin pruning Jobs.
27-feb 01:53 control-station-director JobId 282: No Files found to prune.
27-feb 01:53 control-station-director JobId 282: End auto prune.



Re: [Bacula-users] Performance options for single large (100TB) server backup?

2011-06-28 Thread Josh Fisher

On 6/27/2011 8:43 PM, Steve Costaras wrote:


 

 - How to stream a single job to multiple tape drives.   Couldn't 
 figure this out so that only one tape drive is being used.


There are hardware RAIT controllers available from Ultera 
(http://www.ultera.com/tapesolutions.htm). A RAIT level 0 array would 
allow a volume to be a group of two tapes with the data striped across 
the two tapes, essentially doubling read/write throughput, just like 
RAID-0. But to the OS, and Bacula, the RAIT-0 array looks like a single 
device.





Re: [Bacula-users] Bacula FD not talking to SD

2011-06-28 Thread John Drescher
2011/6/28 John Malone john.mal...@bristol.gov.uk:
 Hi,

 output of client status at the same time as the log below:
 bristol-1622a6f-fd Version: 5.0.3 (04 August 2010)  VSS Linux Cross-compile
 Win32
 Daemon started 28-Jun-11 08:21. Jobs: run=0 running=0.
  Heap: heap=0 smbytes=17,535 max_bytes=17,763 bufs=77 max_bufs=81
  Sizeof: boffset_t=8 size_t=4 debug=0 trace=1

 Running Jobs:
 JobId 19 Job bristol-1622a6f-fd.2011-06-28_10.09.35_06 is running.
     Full Backup Job started: 28-Jun-11 10:08
     Files=0 Bytes=0 Bytes/sec=0 Errors=0
     Files Examined=0
     SDReadSeqNo=5 fd=516

 End of log file showing failure at the same time as the status page above:

 24-Jun 11:37 bacula01-dir JobId 37: No prior Full backup Job record found.
 24-Jun 11:37 bacula01-dir JobId 37: No prior or suitable Full backup found
 in catalog. Doing FULL backup.
 24-Jun 11:37 bacula01-dir JobId 37: Start Backup JobId 37,
 Job=bristol-1622a6f-fd.2011-06-24_11.37.50_12
 24-Jun 11:37 bacula01-dir JobId 37: Using Device FileStorage
 24-Jun 11:45 bacula01-dir JobId 37: Fatal error: Socket error on Storage
 command: ERR=Interrupted system call
 24-Jun 11:45 bacula01-dir JobId 37: Fatal error: Network error with FD
 during Backup: ERR=Interrupted system call
 24-Jun 11:45 bacula01-dir JobId 37: Fatal error: No Job status returned from
 FD.
 24-Jun 11:45 bacula01-dir JobId 37: Bacula bacula01-dir 5.0.2 (28Apr10):
 24-Jun-2011 11:45:53
   Build OS:   i486-pc-linux-gnu debian squeeze/sid
   JobId:  37
   Job:    bristol-1622a6f-fd.2011-06-24_11.37.50_12
   Backup Level:   Full (upgraded from Incremental)
   Client: bristol-1622a6f-fd 5.0.3 (04Aug10)
 Linux,Cross-compile,Win32
   FileSet:    WinTest 2011-06-23 12:46:28
   Pool:   File (From Job resource)
   Catalog:    MyCatalog (From Client resource)
   Storage:    File (From Job resource)
   Scheduled time: 24-Jun-2011 11:37:47
   Start time: 24-Jun-2011 11:37:52
   End time:   24-Jun-2011 11:45:53
   Elapsed time:   8 mins 1 sec
   Priority:   10
   FD Files Written:   0
   SD Files Written:   0
   FD Bytes Written:   0 (0 B)
   SD Bytes Written:   0 (0 B)
   Rate:   0.0 KB/s
   Software Compression:   None
   VSS:    no
   Encryption: no
   Accurate:   no
   Volume name(s):
   Volume Session Id:  3
   Volume Session Time:    1308908426
   Last Volume Bytes:  49,455,379 (49.45 MB)
   Non-fatal FD errors:    0
   SD Errors:  0
   FD termination status:  Error
   SD termination status:  Waiting on FD
   Termination:    Backup Canceled
 All the daemons seem to be running ok, it's just that the FD won't talk to
 the SD (I think). Any suggestions?


Do you have the SD on a port and address that the FD can reach? I mean, you
cannot use localhost or 127.0.0.1 for the SD address and expect a remote FD
to connect.
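
For illustration (hostname made up), the Storage resource in
bacula-dir.conf needs an address the client can actually reach:

Storage {
  Name = File
  Address = bacula01.example.com   # resolvable and reachable from the FD, not 127.0.0.1
  SDPort = 9103
  Device = FileStorage
  Media Type = File
  Password = "xxx"
}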

 I also get "Device is BLOCKED waiting to create a volume". Are the two related
 or are they completely separate problems?


These are two separate problems. Do you have automatic labeling enabled?
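
If not, a sketch of what enables it (the format value is made up):

Pool {
  Name = File
  Pool Type = Backup
  Label Format = "File-"   # director generates volume names like File-0001
}

together with LabelMedia = yes in the SD's Device resource.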

John



Re: [Bacula-users] Performance options for single large (100TB) server backup?

2011-06-28 Thread Steve Costaras



How would the various parts communicate if you're running multiple 
instances on different ports?   I would think that just creating multiple 
jobs would open multiple socket streams and do the same thing.




On 2011-06-28 02:09, Christian Manal wrote:

  - File daemon is single threaded so it is limiting backup performance. Is there 
a way to start more than one stream at the same time for a single machine 
backup? Right now I have all the file systems for a single client in the same 
file set.

  - Tied in with the above, accurate backups cut into performance even more when 
doing all the md5/sha1 calcs. Splitting this, perhaps along with the above, across 
multiple threads would really help.

  - How to stream a single job to multiple tape drives? Couldn't figure this 
out, so only one tape drive is being used.

  - Spooling to disk first and then to tape is a killer. If multiple streams could 
happen at once this might mitigate it, or some type of continuous spooling. How 
do others do this?


Hi,

I haven't tried, but shouldn't it be possible to run multiple instances
of FDs on different ports? You could split up the fileset into multiple
jobs which then can run concurrently on multiple FDs.


Regards,
Christian Manal





Re: [Bacula-users] Performance options for single large (100TB) server backup?

2011-06-28 Thread Steve Costaras




Problem is not really just tape I/O speeds but the ability to get data 
to it.   I.e. the SD is running at about 50% cpu overhead right now 
(single core), so it could possibly handle (2) LTO4 drives, assuming a new 
SD is not spawned off per drive?


I don't really need 'rait' itself, as that would double the probability 
of errors with pure bit striping.   Was thinking more of one file to 
each drive, or a group of files, so that files are intact on each tape (so if you 
lose a tape you're not losing n*#drives worth of files, just the 
single tape's worth of files).




On 2011-06-28 10:01, Josh Fisher wrote:

On 6/27/2011 8:43 PM, Steve Costaras wrote:




 - How to stream a single job to multiple tape drives.   Couldn't
figure this out so that only one tape drive is being used.


There are hardware RAIT controllers available from Ultera
(http://www.ultera.com/tapesolutions.htm). A RAIT level 0 array would
allow a volume to be a group of two tapes with the data striped across
the two tapes, essentially doubling read/write throughput, just like
RAID-0. But to the OS, and Bacula, the RAIT-0 array looks like a single
device.







[Bacula-users] Device

2011-06-28 Thread Mike Hobbs
I'm almost ready to put bacula into production, but I'm still 
debugging a few things. One of them is the Device line in my bacula 
mail reports, and I was hoping someone could help me with it.

I'm running disk-based backups.  Bacula 5.0.3, Vchanger 0.8.6 and a 
16-bay Promise jbod.

In my bacula-sd.conf file, I have these configurations:

Autochanger {
   Name = backup2-vchanger
   Device = jbod1-drive-1
   Device = jbod1-drive-2
   Device = jbod1-drive-3
   Device = jbod1-drive-4
   Device = jbod1-drive-5
   Device = jbod1-drive-6
   Device = jbod1-drive-7
   Device = jbod1-drive-8
   Device = jbod1-drive-9
   Device = jbod1-drive-10
   Device = jbod1-drive-11
   Device = jbod1-drive-12
   Device = jbod1-drive-13
   Device = jbod1-drive-14
   Device = jbod1-drive-15
   Device = jbod1-drive-16
   Changer Command = /usr/local/vchanger/bin/vchanger %c %o %S %a %d
   Changer Device = /usr/local/bacula/etc/vchanger.conf
}

Device {
   Name = jbod1-drive-1
   DriveIndex = 0
   Autochanger = yes;
   DeviceType = File
   MediaType = File
   ArchiveDevice = /usr/local/bacula/working/backup2-vchanger/0/drive0
   RemovableMedia = no;
   RandomAccess = yes;
}

Device {
   Name = jbod1-drive-2
   DriveIndex = 1
   Autochanger = yes;
   DeviceType = File
   MediaType = File
   ArchiveDevice = /usr/local/bacula/working/backup2-vchanger/1/drive1
   RemovableMedia = no;
   RandomAccess = yes;
}

etc..

I have a Device entry for each of my 16 drives (I am not even sure this 
is necessary because I am using the autochanger config above).

My question, bacula mail reports have this line in them:

28-Jun 13:10 mtl-backup2-dir JobId 1: Using Device jbod1-drive-1

The backup worked fine, but it actually wrote all data to 
jbod1-drive-16, not drive-1.  I can't seem to figure out where bacula 
is pulling the device name from.  I have written backups to many 
different drives, but regardless of the drive it writes to, the mail 
report always says it is using drive-1.  I'd like to get this to reflect 
the drive it actually wrote to, if this is possible.

Thank you all for any help you can offer,

mike



Re: [Bacula-users] Performance options for single large (100TB) server backup?

2011-06-28 Thread Roy Sigurd Karlsbakk
Hi,

Out of curiosity, why do you do such forklift replacements when ZFS
supports replacing individual drives, letting the pool resilver and then
automatically grow to the new size?

roy

- Original Message -
 I have been using Bacula for over a year now and it has been providing
 'passable' service, though I think since day one I have been stretching
 it to its limits or need a paradigm shift in how I am configuring it.
 Basically, I have a single server which has direct attached disk
 (~128TB / 112 drives) and Tape drives (LTO4). Its main function is a
 centralized file server & archival server. It has several mount points
 (~20) (ZFS) to break down some structures based on file size and
 intended use, basically spawning a new mountpoint for anything > a
 couple TB or > 100,000 files. Some file systems are up to 30TB in size,
 others are only a handful of GB. With ~4,000,000 files anywhere from
 4KiB up to 32GiB in size.
 Data change is about 1-2TiB/month which is not that big of an issue.
 The problem is when I need to do full backups and restores (restores
 mainly every 1-2 years when I have to do forklift replacement of
 drives). Bottlenecks that I see are:
 - File daemon is single threaded so it is limiting backup performance. Is
 there a way to start more than one stream at the same time for a
 single machine backup? Right now I have all the file systems for a
 single client in the same file set.
 - Tied in with the above, accurate backups cut into performance even more
 when doing all the md5/sha1 calcs. Splitting this, perhaps along with the
 above, across multiple threads would really help.
 - How to stream a single job to multiple tape drives? Couldn't figure
 this out, so only one tape drive is being used.
 - Spooling to disk first and then to tape is a killer. If multiple streams
 could happen at once this might mitigate it, or some type of continuous
 spooling. How do others do this?
 At this point I'm starting to look at Arkeia & NetBackup, both of which
 provide multistreaming and tape drive pooling, but I would rather stick
 with, or send my $$ to, open source if I could, as opposed to closed systems.
 I'm at a point where I can't do a 20-30 day full backup. And 'virtual
 fulls' are not an answer. There's no way I can tie up tape drives for
 the hundreds of tapes at 2.5 hours per tape, assuming zero processing
 overhead. I have plenty of cpu on the system and plenty of disk
 subsystem speed, just can't seem to get at it through bacula.
 So what options are available or how are others backing up huge single
 servers?
--
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented 
intelligibly. It is an elementary imperative for all pedagogues to avoid 
excessive use of idioms of foreign origin. In most cases, adequate and 
relevant synonyms exist in Norwegian.


[Bacula-users] Client / laptop backups

2011-06-28 Thread Roy Sigurd Karlsbakk
Hi all

We're using Bacula for some backups with three SDs so far, and I wonder if it's 
possible somehow to allow for client / laptop backups in a good manner. As far 
as I can see, this will need to either be client-initiated, with the client saying 
"I'm alive!" or something, or have a polling process running to check if the 
client's online for a given period of time.

Is something like this possible or in the works, or is Bacula intended only for 
server backups?

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is 
an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.



Re: [Bacula-users] Client / laptop backups

2011-06-28 Thread Sean Clark
On 06/28/2011 02:24 PM, Roy Sigurd Karlsbakk wrote:
 Hi all

 We're using Bacula for some backups with three SDs so far, and I wonder if 
 it's possible somehow to allow for client / laptop backups in a good manner. 
 As far as I can see, this will need to either be client-initiated, with the client 
 saying "I'm alive!" or something, or have a polling process running to 
 check if the client's online for a given period of time.

 Is something like this possible or in the works, or is Bacula intended only 
 for server backups?
The simplest way to handle this that I've found is to set up space for
the laptops to rsync to, and then run bacula's scheduled backups against
that (as a bonus, you then also have an immediately-readable copy of the
laptop's files if you need to suddenly recover some accidentally-deleted
file without needing to initiate a bacula restore).  We've got laptop
users here that we CAN'T seem to run full bacula backups on because they
never stay plugged in long enough to finish, so rsync is the only way we
can get full backups.

You could, alternatively, set up a way for clients to ssh into the
director with an account permitted to send a run command (with their job
name) to bconsole to initiate a backup manually.
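
For example, something like this on the laptop (job name made up), wrapped
in a script or desktop shortcut:

echo "run job=joes-laptop-backup yes" | bconsole -c /etc/bacula/bconsole.conf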



Re: [Bacula-users] Client / laptop backups

2011-06-28 Thread Gavin McCullagh
Hi,

On Tue, 28 Jun 2011, Roy Sigurd Karlsbakk wrote:

 We're using Bacula for some backups with three SDs so far, and I wonder
 if it's possible somehow to allow for client / laptop backups in a good
 manner. As far as I can see, this will need to either be
 client-initiated, client saying I'm alive! or something, or having a
 polling process running to check if the client's online for a given
 period of time.
 
 Is something like this possible or in the works, or is Bacula intended
 only for server backups?

We do this in a somewhat manual way.  The client computer has a bconsole
configured.  When the laptop owner wants a backup, they start bconsole,
then type run<return>, yes<return>, quit<return>.  They then get a
confirmation email when the backup completes.

Gavin




Re: [Bacula-users] Device

2011-06-28 Thread Josh Fisher

On 6/28/2011 2:23 PM, Mike Hobbs wrote:
 I'm almost ready with putting bacula into production but I'm still
 debugging a few things and one of them is the Device line in my bacula
 mail reports, and I was hoping someone could help me with this.

 I'm running disk-based backups.  Bacula 5.0.3, Vchanger 0.8.6 and a
 16-bay Promise jbod.

 In my bacula-sd.conf file, I have these configurations:

 Autochanger {
 Name = backup2-vchanger
 Device = jbod1-drive-1
 Device = jbod1-drive-2
 Device = jbod1-drive-3
 Device = jbod1-drive-4
 Device = jbod1-drive-5
 Device = jbod1-drive-6
 Device = jbod1-drive-7
 Device = jbod1-drive-8
 Device = jbod1-drive-9
 Device = jbod1-drive-10
 Device = jbod1-drive-11
 Device = jbod1-drive-12
 Device = jbod1-drive-13
 Device = jbod1-drive-14
 Device = jbod1-drive-15
 Device = jbod1-drive-16
 Changer Command = /usr/local/vchanger/bin/vchanger %c %o %S %a %d
 Changer Device = /usr/local/bacula/etc/vchanger.conf
 }

 Device {
 Name = jbod1-drive-1
 DriveIndex = 0
 Autochanger = yes;
 DeviceType = File
 MediaType = File
 ArchiveDevice = /usr/local/bacula/working/backup2-vchanger/0/drive0
 RemovableMedia = no;
 RandomAccess = yes;
 }

 Device {
 Name = jbod1-drive-2
 DriveIndex = 1
 Autochanger = yes;
 DeviceType = File
 MediaType = File
 ArchiveDevice = /usr/local/bacula/working/backup2-vchanger/1/drive1
 RemovableMedia = no;
 RandomAccess = yes;
 }

 etc..

 I have a Device entry for each of my 16 drives (I am not even sure this
 is necessary because I am using the autochanger config above).

It isn't necessary. jbod1-drive-1, jbod1-drive-2, etc. are virtual 
drives. The ArchiveDevice path in each virtual drive contains a symlink. 
The symlink is created by vchanger to point to the folder containing the 
volume file that is to be used. Any of the virtual drives may be 
loaded with a volume from any of the magazines (ie physical drives). 
It is possible for all 16 of your virtual drives to be simultaneously 
writing to different volume files on the same physical drive. It just 
depends on which volumes are being used. vchanger knows which physical 
drive those volumes are on and sets the symlink for the SD device 
accordingly when Bacula issues the load command to load a volume into a 
SD device. With vchanger, there is no relationship between SD device 
(virtual drive) and physical drive, or at least not a direct relationship.

 My question, bacula mail reports have this line in them:

 28-Jun 13:10 mtl-backup2-dir JobId 1: Using Device jbod1-drive-1

 The backup worked fine, but the backup actually wrote all data to
 jbod1-drive-16, not drive-1.  I can't seem to figure out where bacula
 is pulling the device name from.  I have written backups to many
 different drives, but regardless of the drive it writes to, the mail
 report always says it is using drive-1.  I'd like to get this to reflect
 what drive it actually wrote to, if this is possible.

Most likely, all jobs were indeed written to SD device jbod1-drive-1. 
However, the volume file that was used was apparently located on the 
physical drive specified by the 16th magazine= line in the vchanger 
config file. What this is telling you is that you are not running jobs 
concurrently, so they are all using SD device jbod1-drive-1. They are 
going to different physical drives because they are being written to 
volume files which may reside on any of those physical drives. Bacula 
first selects a volume; then, if that volume is not already loaded, 
it selects an SD device to load the selected volume into and issues a 
load command to vchanger. vchanger responds to the load command by 
setting the symlink associated with that SD device to point to the 
physical drive where the selected volume resides. If you will not be 
running jobs concurrently, then there is no reason to have more than one 
SD device (virtual drive). That one virtual drive will still be able to 
write to volumes on any of the 16 physical drives, one at a time.
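
In other words, for non-concurrent use the bacula-sd.conf above could
shrink to something like (names taken from your config):

Autochanger {
   Name = backup2-vchanger
   Device = jbod1-drive-1
   Changer Command = /usr/local/vchanger/bin/vchanger %c %o %S %a %d
   Changer Device = /usr/local/bacula/etc/vchanger.conf
}

keeping only the jbod1-drive-1 Device resource; that one virtual drive can
still be loaded with volumes living on any of the 16 physical drives.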

So, it is working as expected. :) When using vchanger, the physical 
drive (or drives) used by a job is reflected in the volume labels of the 
volumes that the job wrote to. Every magazine (physical drive) is 
assigned a unique magazine number when initialized. The magazine number 
is used as the zero-padded number between the '_' characters in the 
volume label. Volumes are associated with a physical drive, but SD 
devices (virtual drives) are not.



Re: [Bacula-users] Client / laptop backups

2011-06-28 Thread Gavin McCullagh
Hi,

On Tue, 28 Jun 2011, Gavin McCullagh wrote:

 We do this in a somewhat manual way.  The client computer has a bconsole
 configured.  When the laptop owner wants a backup, they start bconsole,
 then type run<return>, yes<return>, quit<return>.  They then get a
 confirmation email when the backup completes.

I should probably spell out exactly what we do:

 - configure a console on the director specifically for that laptop
 - configure a corresponding bconsole on the laptop
 - create ACLs for that console to access only the job, pool, devices, etc.
   that relate to it (see the sketch after this list)
 - set the default backup to incremental for that job (assuming that's what
   you want) so that's what runs immediately
 - configure a monthly virtual full backup to consolidate the incrementals
   into a new full backup and allow you to rotate the volumes
 - configure a dedicated messages entry on the director for this job so
   that the user gets a copy of the backup confirmation
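
As a rough sketch (names and password made up), the director-side Console
resource for one laptop might look like:

Console {
  Name = joes-laptop-console
  Password = "xxx"                  # matches the laptop's bconsole.conf
  JobACL = joes-laptop-backup
  ClientACL = joes-laptop-fd
  PoolACL = laptop-pool
  StorageACL = File
  CatalogACL = MyCatalog
  CommandACL = run, status, messages, quit
}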

This has worked really well for us.  It would be nice, of course, if a backup
could magically start on plug-in, but then if the user unplugs shortly
afterward you get a broken backup, so we kind of feel that the owner
deciding to run a backup is not a bad thing.

A nice GUI console client for Windows would perhaps be an improvement but
the console commands are so simple that it's easy enough to just write
these steps up.

Gavin




[Bacula-users] Performance options for single large (100TB) server backup?

2011-06-28 Thread Bob Hetzel

Steve,

You should be able to run multiple file daemons on the storage device, but 
a better idea might be to run the backups (and restores) off the clients, 
as many in parallel as your system can handle.  Look into concurrency.

If you split up the fileset into separate jobs you can have them go to 
separate tape drives at the same time (look up concurrent settings in the 
manual--you'll have to set that in multiple places).
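
Roughly (the value 4 is made up), a Maximum Concurrent Jobs line has to be
raised in several resources before two jobs will actually run at once:

Maximum Concurrent Jobs = 4   # bacula-dir.conf: Director, Client, Storage and Job resources
Maximum Concurrent Jobs = 4   # bacula-sd.conf: Storage resource
Maximum Concurrent Jobs = 4   # bacula-fd.conf: FileDaemon resource

With one Device resource per tape drive in the SD, concurrent jobs can then
keep both drives busy.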

Years ago I ran into a similar issue... we had a SAN with e-mail stored on 
it.  The backup was done off a snapshot of the SAN mounted on the computer 
driving the tape drives.  Had this been the most powerful computer in the 
room that would have been great but unfortunately it was not up to the task 
of both taking data off the SAN and processing the tape drives as fast as 
they could go.  I never got around to moving whole scheme back to strictly 
client based backups (i.e. from the mail servers directly instead of from 
the backup server mounting them) but if I had it would have been better. 
The downside to that is that your system then becomes more complex and you 
have to make sure you don't back up anything twice as well as make sure you 
aren't missing the backup of anything important.

In the next version of bacula (just last week Kern said he'd have a beta in 
the next few weeks, so hang on to your hat!) one of the improvements is 
supposed to be a more efficient hashing algorithm, to boot.  It sounds like 
that will give a substantial increase in performance, but that alone 
probably will not solve your problem.  I think you're going to have to try a 
lot of different configurations and test which ones work best for your 
design parameters (i.e. questions like "How long can I go w/o a full 
backup?" and "How long can I stand a complete disaster recovery restore 
taking?").




 From: Steve Costaras stev...@chaven.com
 Subject: [Bacula-users] Performance options for single large (100TB)
   server  backup?
 To: bacula-users@lists.sourceforge.net
 Message-ID: W210986168202161309221804@webmail17
 Content-Type: text/plain; charset=utf-8



 I have been using Bacula for over a year now and it has been providing 
 'passable' service, though I think since day one I have been stretching it to 
 its limits or need a paradigm shift in how I am configuring it.

 Basically, I have a single server which has direct attached disk (~128TB / 112 
 drives) and Tape drives (LTO4). Its main function is a centralized file 
 server & archival server. It has several mount points (~20) (ZFS) to break 
 down some structures based on file size and intended use, basically spawning a 
 new mountpoint for anything > a couple TB or > 100,000 files. Some file systems 
 are up to 30TB in size, others are only a handful of GB. With ~4,000,000 files 
 anywhere from 4KiB up to 32GiB in size.

 Data change is about 1-2TiB/month which is not that big of an issue. The 
 problem is when I need to do full backups and restores (restores mainly every 
 1-2 years when I have to do forklift replacement of drives). Bottlenecks that 
 I see are:

  - File daemon is single threaded so it is limiting backup performance. Is there 
 a way to start more than one stream at the same time for a single machine 
 backup? Right now I have all the file systems for a single client in the same 
 file set.

  - Tied in with the above, accurate backups cut into performance even more when 
 doing all the md5/sha1 calcs. Splitting this, perhaps along with the above, across 
 multiple threads would really help.

  - How to stream a single job to multiple tape drives? Couldn't figure this 
 out, so only one tape drive is being used.

  - Spooling to disk first and then to tape is a killer. If multiple streams could 
 happen at once this might mitigate it, or some type of continuous spooling. How 
 do others do this?



 At this point I'm starting to look at Arkeia & NetBackup, both of which provide 
 multistreaming and tape drive pooling, but I would rather stick with, or send my 
 $$ to, open source if I could, as opposed to closed systems.

 I'm at a point where I can't do a 20-30 day full backup. And 'virtual fulls' 
 are not an answer. There's no way I can tie up tape drives for the hundreds 
 of tapes at 2.5 hours per tape, assuming zero processing overhead. I have 
 plenty of cpu on the system and plenty of disk subsystem speed, just can't 
 seem to get at it through bacula.

 So what options are available or how are others backing up huge single 
 servers?



Re: [Bacula-users] Performance options for single large (100TB) server backup?

2011-06-28 Thread Steve Costaras



Yes, in this case the 'client' is the backup server, as I had a free 
slot for the tape drives and due to the size didn't want to carry this 
over the network.


If I split this up into separate jobs, say one job per mount point (I have 
~30 mount points at this time), that may work; however, I may be doing 
something wrong, as the jobs run sequentially, not concurrently, 
though as you mentioned I may be missing some setting in another file to 
accomplish that.


Improved hashing would help, though frankly the biggest item would be to 
get rid of the 'double backup' (once to spool and then to tape); a rolling 
window of spooling or something like that would be a /much/ larger win.


Right now I try to do full backups every 6 months, or when a large 
ingress happens and the delta change is greater than ~1/5 the total size 
of storage.   My goal would be to try and get backups down to say ~7 
days on a single LTO4, or about 4 days with two LTO4 drives (similar 
for a complete restore).


The only real issue with multiple jobs, as opposed to multi-streaming in 
a single job, would be the restore process having to restore from each 
file set separately, rather than having a single index for the entire 
system and letting bacula figure out what jobs/file sets are needed.    Or 
is there a way to accomplish this that I'm not seeing?



On 2011-06-28 19:04, Bob Hetzel wrote:

Steve,

You should be able to run multiple file daemons on the storage device, but
a better idea might be to run the backups (and restores) off the clients,
as many in parallel as your system can handle.  Look into concurrency.

If you split up the fileset into separate jobs you can have them go to
separate tape drives at the same time (look up concurrent settings in the
manual--you'll have to set that in multiple places).

Years ago I ran into a similar issue... we had a SAN with e-mail stored on
it.  The backup was done off a snapshot of the SAN mounted on the computer
driving the tape drives.  Had this been the most powerful computer in the
room that would have been great but unfortunately it was not up to the task
of both taking data off the SAN and processing the tape drives as fast as
they could go.  I never got around to moving whole scheme back to strictly
client based backups (i.e. from the mail servers directly instead of from
the backup server mounting them) but if I had it would have been better.
The downside to that is that your system then becomes more complex and you
have to make sure you don't back up anything twice as well as make sure you
aren't missing the backup of anything important.

In the next version of bacula (just last week Kern said he'd have a beta in
the next few weeks, so hang on to your hat!) one of the improvements is
supposed to be a more efficient hashing algorithm, to boot.  It sounds like
that will give a substantial increase in performance, but that alone
probably will not solve your problem.  I think you're going to have to try a
lot of different configurations and test which ones work best for your
design parameters (i.e. questions like "How long can I go w/o a full
backup?" and "How long can I stand a complete disaster recovery restore
taking?").





From: Steve Costaras stev...@chaven.com
Subject: [Bacula-users] Performance options for single large (100TB)
server backup?
To: bacula-users@lists.sourceforge.net
Message-ID: W210986168202161309221804@webmail17
Content-Type: text/plain; charset=utf-8



I have been using Bacula for over a year now and it has been providing 
'passable' service, though I think since day one I have been stretching it to 
its limits or need a paradigm shift in how I am configuring it.

Basically, I have a single server which has direct attached disk (~128TB / 112 drives) 
and Tape drives (LTO4). Its main function is a centralized file server & archival 
server. It has several mount points (~20) (ZFS) to break down some structures based on 
file size and intended use, basically spawning a new mountpoint for anything > a 
couple TB or > 100,000 files. Some file systems are up to 30TB in size, others are only a 
handful of GB. With ~4,000,000 files anywhere from 4KiB up to 32GiB in size.

Data change is about 1-2TiB/month which is not that big of an issue. The 
problem is when I need to do full backups and restores (restores mainly every 
1-2 years when I have to do forklift replacement of drives). Bottlenecks that I 
see are:

  - File daemon is single threaded so it is limiting backup performance. Is there 
a way to start more than one stream at the same time for a single machine 
backup? Right now I have all the file systems for a single client in the same 
file set.

  - Tied in with the above, accurate backups cut into performance even more when 
doing all the md5/sha1 calcs. Splitting this, perhaps along with the above, across 
multiple threads would really help.

  - How to stream a single job to multiple tape drives? Couldn't figure this 
out, so only one tape drive is being used.

  - spooling to disk 

[Bacula-users] Job is waiting for a mount request

2011-06-28 Thread Venkatesh K Reddy
Hi,

I am a newbie to Bacula. I tried to find out what is happening by searching the
list and could not get the right answer. Probably I am not searching with the
right question or looking in the right place.

We have a test setup with File storage. All daemons are able to talk to each
other and a small backup successfully finished. Here is the configuration of
Pool and Storage.

# Default pool definition
Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle
Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 365 days # one year
}

# File Pool definition
Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle
Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 365 days # one year
  Maximum Volume Bytes = 50G  # Limit Volume size to something
reasonable
  Maximum Volumes = 100   # Limit number of Volumes in Pool
  Label Format = FileShare
}

Here is the storage definition

Storage { # definition of myself
  Name = cincidr-sd
  SDPort = 9103  # Director's port
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20
}

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /home/backups/fileshare
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum File Size = 262144000;
  Maximum Volume Size = 262144000;
}

I tried to backup a large file set (20+ GB). It stopped half way with the
following message.

Running Jobs:
Console connected at 28-Jun-11 17:06
 JobId Level   Name   Status
==
 1 FullFileShare.2011-06-28_10.04.46_03 is waiting for a mount
request

Here is the status of storage.

Device status:
Device FileStorage (/home/backups/fileshare) open but no Bacula volume is
currently mounted.
Device is BLOCKED waiting for mount of volume FileShare0037,
   Pool:File
   Media type:  File
Total Bytes Read=0 Blocks Read=0 Bytes/block=0
Positioned at File=0 Block=0

I have already set Automatic Mount = Yes as part of configuration. I am not
sure where I am going wrong. Please help me.

thanks,

Venkatesh


Re: [Bacula-users] Job is waiting for a mount request

2011-06-28 Thread John Drescher
2011/6/28 Venkatesh K Reddy venkat...@kaevee.com:
 Hi,

 I am a newbie to Bacula. I tried to find out what is happening in the list
 and could not get right answer. Probably I am not searching with right
 question or looking at right place.

 We have a test setup with File storage. All daemons are able to talk to each
 other and a small backup successfully finished. Here is the configuration of
 Pool and Storage.

 # Default pool definition
 Pool {
   Name = Default
   Pool Type = Backup
   Recycle = yes   # Bacula can automatically recycle
 Volumes
   AutoPrune = yes # Prune expired volumes
   Volume Retention = 365 days # one year
 }

 # File Pool definition
 Pool {
   Name = File
   Pool Type = Backup
   Recycle = yes   # Bacula can automatically recycle
 Volumes
   AutoPrune = yes # Prune expired volumes
   Volume Retention = 365 days # one year
   Maximum Volume Bytes = 50G  # Limit Volume size to something
 reasonable
   Maximum Volumes = 100   # Limit number of Volumes in Pool
   Label Format = FileShare
 }

 Here is the storage definition

 Storage { # definition of myself
   Name = cincidr-sd
   SDPort = 9103  # Director's port
   WorkingDirectory = /var/lib/bacula
   Pid Directory = /var/run
   Maximum Concurrent Jobs = 20
 }

 Device {
   Name = FileStorage
   Media Type = File
   Archive Device = /home/backups/fileshare
   LabelMedia = yes;   # lets Bacula label unlabeled media
   Random Access = Yes;
   AutomaticMount = yes;   # when device opened, read it
   RemovableMedia = no;
   AlwaysOpen = no;
   Maximum File Size = 262144000;
   Maximum Volume Size = 262144000;
 }

 I tried to backup a large file set (20+ GB). It stopped half way with the
 following message.

 Running Jobs:
 Console connected at 28-Jun-11 17:06
  JobId Level   Name   Status
 ==
  1 Full    FileShare.2011-06-28_10.04.46_03 is waiting for a mount
 request

 Here is the status of storage.

 Device status:
 Device FileStorage (/home/backups/fileshare) open but no Bacula volume is
 currently mounted.
     Device is BLOCKED waiting for mount of volume FileShare0037,
    Pool:    File
    Media type:  File
     Total Bytes Read=0 Blocks Read=0 Bytes/block=0
     Positioned at File=0 Block=0

 I have already set Automatic Mount = Yes as part of configuration. I am not
 sure where I am going wrong. Please help me.


Did you use the umount command recently?

John



Re: [Bacula-users] Job is waiting for a mount request

2011-06-28 Thread Venkatesh K Reddy
I had used unmount, but I have since nuked the whole configuration:

/etc/init.d/bacula-sd stop
/etc/init.d/bacula-dir stop
/usr/lib/bacula/drop_mysql_tables
/usr/lib/bacula/make_mysql_tables
rm -rf /var/lib/bacula/*
/etc/init.d/bacula-sd start
/etc/init.d/bacula-dir start

I did the same on the client.

Thanks,

Venkatesh K

On Wed, Jun 29, 2011 at 8:01 AM, John Drescher dresche...@gmail.com wrote:

 2011/6/28 Venkatesh K Reddy venkat...@kaevee.com:
  Hi,
 
  I am a newbie to Bacula. I tried to find out what is happening in the
 list
  and could not get right answer. Probably I am not searching with right
  question or looking at right place.
 
  We have a test setup with File storage. All daemons are able to talk to
 each
  other and a small backup successfully finished. Here is the configuration
 of
  Pool and Storage.
 
  # Default pool definition
  Pool {
Name = Default
Pool Type = Backup
Recycle = yes   # Bacula can automatically recycle
  Volumes
AutoPrune = yes # Prune expired volumes
Volume Retention = 365 days # one year
  }
 
  # File Pool definition
  Pool {
Name = File
Pool Type = Backup
Recycle = yes   # Bacula can automatically recycle
  Volumes
AutoPrune = yes # Prune expired volumes
Volume Retention = 365 days # one year
Maximum Volume Bytes = 50G  # Limit Volume size to something
  reasonable
Maximum Volumes = 100   # Limit number of Volumes in Pool
Label Format = FileShare
  }
 
  Here is the storage definition
 
  Storage { # definition of myself
Name = cincidr-sd
SDPort = 9103  # Director's port
WorkingDirectory = /var/lib/bacula
Pid Directory = /var/run
Maximum Concurrent Jobs = 20
  }
 
  Device {
Name = FileStorage
Media Type = File
Archive Device = /home/backups/fileshare
LabelMedia = yes;   # lets Bacula label unlabeled media
Random Access = Yes;
AutomaticMount = yes;   # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
Maximum File Size = 262144000;
Maximum Volume Size = 262144000;
  }
 
  I tried to backup a large file set (20+ GB). It stopped half way with the
  following message.
 
  Running Jobs:
  Console connected at 28-Jun-11 17:06
   JobId Level   Name   Status
  ==
   1 FullFileShare.2011-06-28_10.04.46_03 is waiting for a mount
  request
 
  Here is the status of storage.
 
  Device status:
  Device FileStorage (/home/backups/fileshare) open but no Bacula volume
 is
  currently mounted.
  Device is BLOCKED waiting for mount of volume FileShare0037,
 Pool:File
 Media type:  File
  Total Bytes Read=0 Blocks Read=0 Bytes/block=0
  Positioned at File=0 Block=0
 
  I have already set Automatic Mount = Yes as part of configuration. I am
 not
  sure where I am going wrong. Please help me.
 

 Did you use the umount command recently?

 John



Re: [Bacula-users] Job is waiting for a mount request

2011-06-28 Thread John Drescher
On Tue, Jun 28, 2011 at 10:36 PM, Venkatesh K Reddy
venkat...@kaevee.com wrote:
 I had used unmount. But, I nuked the whole configuration.


If you ever use umount, then when bacula wants the next volume you
will have to mount it manually, since umount takes the storage device out of
bacula's control. If you just want to unload a volume without bacula
giving up control of the storage device, use the release command
instead.
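
In bconsole terms (assuming the Director-side Storage resource is named
FileStorage):

release storage=FileStorage   # unload the volume; bacula keeps the device

and if you have already used umount, a manual

mount storage=FileStorage

should unblock a job that is waiting on a mount request.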

John



Re: [Bacula-users] minimum full interval?

2011-06-28 Thread Craig Isdahl
 What are you looking for that Max Full Interval doesn't do?  If you set
 Max Full Interval to 30 days, and only schedule incremental jobs, then
 every 30 days one of your incrementals gets bumped to a full.  Are you
 looking to prevent someone from manually firing off a full?
 
 Mark

These are VM snapshots, so each backup is a full.  We currently do exactly
what you recommend for file system backups and it works great - I'm just not
able to figure out how to run one full backup every 30 days (no diff or incr)
without scheduling specific times, which I'd like to avoid due to the number
of clients.

-- Craig
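
For reference, a minimal sketch of the Max Full Interval approach Mark
describes; the resource names and run time below are hypothetical, not
Craig's actual configuration:

  Job {
    Name = "vm-snap-example"          # hypothetical job name
    Schedule = "DailyIncr"
    Max Full Interval = 30 days       # bump one Incremental to Full every 30 days
    # ... Client, FileSet, Storage, Pool, etc. as usual ...
  }

  Schedule {
    Name = "DailyIncr"
    Run = Incremental sun-sat at 23:05  # only Incrementals are scheduled
  }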




Re: [Bacula-users] Job is waiting for a mount request

2011-06-28 Thread Venkatesh K Reddy
I don't think I have used unmount since removing the old state files and
nuking all the data in the SQL database.

I started afresh, and the backup ran smoothly, created about 30 volumes
(256 MB each), and then stopped with the following message:

Running Jobs:
Console connected at 28-Jun-11 18:45
 JobId Level   Name                               Status
==
     1 Full    FileShare.2011-06-28_10.04.46_03   is waiting for a mount request

When I checked the storage status, I got the following info:

Device status:
Device FileStorage (/home/backups/fileshare) open but no Bacula volume is
currently mounted.
    Device is BLOCKED waiting for mount of volume FileShare0037,
       Pool:        File
       Media type:  File
    Total Bytes Read=0 Blocks Read=0 Bytes/block=0
    Positioned at File=0 Block=0

I searched the mailing lists and could not find any post describing a
problem similar to the one I am facing.

Thanks,

Venkatesh K

On Wed, Jun 29, 2011 at 8:13 AM, John Drescher dresche...@gmail.com wrote:

 On Tue, Jun 28, 2011 at 10:36 PM, Venkatesh K Reddy
 venkat...@kaevee.com wrote:
  I had used unmount, but I nuked the whole configuration.
 

 If you ever use the umount command, you will have to manually mount the
 next volume that Bacula wants, because umount takes the storage device
 out of Bacula's control. If you just want to unload a volume without
 Bacula giving up control of the storage device, use the release command
 instead.

 John



Re: [Bacula-users] Job is waiting for a mount request

2011-06-28 Thread John Drescher
On Tue, Jun 28, 2011 at 10:59 PM, Venkatesh K Reddy
venkat...@kaevee.com wrote:
 I don't think I have used unmount since removing the old state files and
 nuking all the data in the SQL database.

 I started afresh, and the backup ran smoothly, created about 30 volumes
 (256 MB each), and then stopped with the following message:

 Running Jobs:
 Console connected at 28-Jun-11 18:45
  JobId Level   Name                               Status
 ==
      1 Full    FileShare.2011-06-28_10.04.46_03   is waiting for a mount request

 When I checked the storage status, I got the following info:

 Device status:
 Device FileStorage (/home/backups/fileshare) open but no Bacula volume is
 currently mounted.
     Device is BLOCKED waiting for mount of volume FileShare0037,
        Pool:        File
        Media type:  File
     Total Bytes Read=0 Blocks Read=0 Bytes/block=0
     Positioned at File=0 Block=0

 I searched the mailing lists and could not find any post describing a
 problem similar to the one I am facing.


Does the volume FileShare0037 exist in /home/backups/fileshare?

Do you have any limit on how many volumes can be in your pool?

John



Re: [Bacula-users] Job is waiting for a mount request

2011-06-28 Thread Venkatesh K Reddy
Hi,

On Wed, Jun 29, 2011 at 8:52 AM, John Drescher dresche...@gmail.com wrote:

 On Tue, Jun 28, 2011 at 10:59 PM, Venkatesh K Reddy
 venkat...@kaevee.com wrote:
  I don't think I have used unmount since removing the old state files
  and nuking all the data in the SQL database.
 
  I started afresh, and the backup ran smoothly, created about 30 volumes
  (256 MB each), and then stopped with the following message:
 
  Running Jobs:
  Console connected at 28-Jun-11 18:45
   JobId Level   Name                               Status
  ==
       1 Full    FileShare.2011-06-28_10.04.46_03   is waiting for a mount request
 
  When I checked the storage status, I got the following info:
 
  Device status:
  Device FileStorage (/home/backups/fileshare) open but no Bacula volume
  is currently mounted.
      Device is BLOCKED waiting for mount of volume FileShare0037,
         Pool:        File
         Media type:  File
      Total Bytes Read=0 Blocks Read=0 Bytes/block=0
      Positioned at File=0 Block=0
 
  I searched the mailing lists and could not find any post describing a
  problem similar to the one I am facing.
 

 Does the volume FileShare0037 exist in /home/backups/fileshare?


No. It does not exist.



 Do you have any limit on how many volumes can be in your pool?


Yes. The limit is 100.

# File Pool definition
Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes               # Bacula can automatically recycle Volumes
  AutoPrune = yes             # Prune expired volumes
  Volume Retention = 365 days # one year
  Maximum Volume Bytes = 50G  # Limit Volume size to something reasonable
  Maximum Volumes = 100       # Limit number of Volumes in Pool
  Label Format = FileShare
}

Thanks,

Venkatesh K


Re: [Bacula-users] Job is waiting for a mount request

2011-06-28 Thread John Drescher
On Tue, Jun 28, 2011 at 11:36 PM, Venkatesh K Reddy
venkat...@kaevee.com wrote:
 Hi,

 On Wed, Jun 29, 2011 at 8:52 AM, John Drescher dresche...@gmail.com wrote:

  On Tue, Jun 28, 2011 at 10:59 PM, Venkatesh K Reddy
  venkat...@kaevee.com wrote:
   I don't think I have used unmount since removing the old state files
   and nuking all the data in the SQL database.
  
   I started afresh, and the backup ran smoothly, created about 30
   volumes (256 MB each), and then stopped with the following message:
  
   Running Jobs:
   Console connected at 28-Jun-11 18:45
    JobId Level   Name                               Status
   ==
        1 Full    FileShare.2011-06-28_10.04.46_03   is waiting for a mount request
  
   When I checked the storage status, I got the following info:
  
   Device status:
   Device FileStorage (/home/backups/fileshare) open but no Bacula
   volume is currently mounted.
       Device is BLOCKED waiting for mount of volume FileShare0037,
          Pool:        File
          Media type:  File
       Total Bytes Read=0 Blocks Read=0 Bytes/block=0
       Positioned at File=0 Block=0
  
   I searched the mailing lists and could not find any post describing
   a problem similar to the one I am facing.
  
 

 Does the volume FileShare0037 exist in /home/backups/fileshare?

 No. It does not exist.


I believe it should exist at that point. You may want to look at the logs
to see if there were any error messages. Are you sure you did not run out
of space?


 Do you have any limit on how many volumes can be in your pool?

 Yes. The limit is 100.


Does that volume exist in the output of:

list media pool=File

You can execute that in bconsole. If it does exist, is it the last volume?
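
For reference, a quick way to cross-check from the shell as well; the
path and volume name below are taken from earlier in this thread:

  # Does the volume file actually exist on disk?
  ls -l /home/backups/fileshare/FileShare0037

  # Is there free space on the filesystem holding the archive device?
  df -h /home/backups/fileshare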

John



Re: [Bacula-users] Job is waiting for a mount request

2011-06-28 Thread Venkatesh K Reddy
John,

Please accept my apologies. It was a disk space issue after all. The server
has 1.7 TB, but our administrator mounted the backup partition at the wrong
path, and I did not think to check the mount points.

Many thanks for your valuable time and for pointing me in the right
direction. I am glad to be part of the wonderful Bacula community.

Thanks again,

Venkatesh K

On Wed, Jun 29, 2011 at 9:14 AM, John Drescher dresche...@gmail.com wrote:

 On Tue, Jun 28, 2011 at 11:36 PM, Venkatesh K Reddy
 venkat...@kaevee.com wrote:
  Hi,
 
  On Wed, Jun 29, 2011 at 8:52 AM, John Drescher dresche...@gmail.com
  wrote:
   On Tue, Jun 28, 2011 at 10:59 PM, Venkatesh K Reddy
   venkat...@kaevee.com wrote:
    I don't think I have used unmount since removing the old state
    files and nuking all the data in the SQL database.
   
    I started afresh, and the backup ran smoothly, created about 30
    volumes (256 MB each), and then stopped with the following message:
   
    Running Jobs:
    Console connected at 28-Jun-11 18:45
     JobId Level   Name                               Status
    ==
         1 Full    FileShare.2011-06-28_10.04.46_03   is waiting for a mount request
   
    When I checked the storage status, I got the following info:
   
    Device status:
    Device FileStorage (/home/backups/fileshare) open but no Bacula
    volume is currently mounted.
        Device is BLOCKED waiting for mount of volume FileShare0037,
           Pool:        File
           Media type:  File
        Total Bytes Read=0 Blocks Read=0 Bytes/block=0
        Positioned at File=0 Block=0
   
    I searched the mailing lists and could not find any post describing
    a problem similar to the one I am facing.
   
 
   Does the volume FileShare0037 exist in /home/backups/fileshare?
 
  No. It does not exist.
 
 
 I believe it should exist at that point. You may want to look at the logs
 to see if there were any error messages. Are you sure you did not run out
 of space?
 
 
   Do you have any limit on how many volumes can be in your pool?
 
  Yes. The limit is 100.
 
 
 Does that volume exist in the output of:
 
 list media pool=File
 
 You can execute that in bconsole. If it does exist, is it the last volume?

 John



Re: [Bacula-users] VXA-2 Tape not filling

2011-06-28 Thread Christian Tardif
On 23/06/2011 10:06, Brian Debelius wrote:

   I found something strange. If I try to issue this command:

   mt -f /dev/nst0 status

   I'll get one line that says:

   Tape block size 0 bytes. Density code 0x81 (DLT 15GB compressed).

   Isn't that strange? I'm trying to understand what this density code
   is doing there. tapeinfo reports this density code as well (which it
   should, anyway) but says that Partition 0 Size in KBytes is 76787712.

  From my experience the density code does not mean anything. The tape
  block size of 0 bytes indicates that the tape drive is set for variable
  block size, which is what Bacula wants by default. You may want to play
  with this and set it to a fixed, larger size for performance after you
  get things working. I use 256K blocks.
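
For reference, a minimal sketch of that block-size tuning, assuming the
drive from this thread at /dev/nst0 and 256K blocks; the values below are
an illustration, not the actual configuration used here:

  # set a fixed 256 KB block size on the drive (mt-st)
  mt -f /dev/nst0 setblk 262144

  # matching Device directives in bacula-sd.conf so the SD writes
  # fixed 256 KB blocks
  Device {
    ...
    Minimum Block Size = 262144
    Maximum Block Size = 262144
  }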

Take a look at the screenshot (Media-B and C). It's actually the exact
same media (Media-B has been purged and relabelled as C). At the time of
backup, Media-B appeared as full at 27.71 GB. Now, C is full at 57.76 GB.
Not a bit of change in the config. And the VXA-2 should hold 80 GB
uncompressed.



Christian...
  
