Re: [Bacula-users] Feature requests?

2012-05-13 Thread Ralf Gross
Roy Sigurd Karlsbakk schrieb:
> 
> I filed a feature request on
> http://bugs.bacula.org/view.php?id=1866, only to have it closed with
> reason "no change required". Is there another preferred way to file
> feature requests than using the bug tracker? If so, can you please
> remove the "feature" level, or at least document the reason for this
level? As for the feature request, allowing independent upgrade of
> SD and Director, I find this a rather important one in those cases
> where you have multiple SDs (I have four now, two more to come, and
> I've talked with other people having even more).


http://www.bacula.org/en/?page=feature-request

or

http://www.baculasystems.com/support


Ralf



Re: [Bacula-users] Spooling question

2011-12-04 Thread Ralf Gross
Phil Stracchino schrieb:
> I've just acquired an LTO4 drive, and am setting up spooling for the
> first time.  (The machine with the LTO4 drive attached has mirrored SSDs
> and a 6GB/s SAS controller, so it's a great setup for spooling.)
> There's one thing I'm not clear on:  It appears to me that spooling is
> enabled on a job-by-job level, rather than device-by-device.  Since my
> plan is to have Full backups run to tape while incrementals and
> differentials run to disk, what I really want is to be able to have
> *all* jobs spooled *if and only if* being written to the tape drive on
> babylon5-sd, but not if writing to the 12TB ZFS disk array on babylon4-sd.
> 
> Can this be done?  Or is spool enabling strictly job-by-job?


You can override it in the job's Schedule Run directive.

http://www.bacula.org/en/dev-manual/main/main/Data_Spooling.html

---
To override the Job specification in a Schedule Run directive in the
Director's conf file:

SpoolData = yes|no
---
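
For example, a minimal sketch of such an override (schedule, pool and
time values are hypothetical):

Schedule {
  Name = "WeeklyCycle"
  # Fulls go to tape -> force spooling on
  Run = Level=Full Pool=TapePool SpoolData=yes sun at 23:05
  # Incrementals/differentials go to disk -> leave spooling off
  Run = Level=Incremental Pool=DiskPool SpoolData=no mon-sat at 23:05
}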


Ralf



Re: [Bacula-users] Label barcodes -> unsure if bacula labels everything

2011-11-14 Thread Ralf Gross
Denny Schierz schrieb:
> hi,
> 
> I've printed some barcodes for our LTO-2 autochanger, which is also used for
> Amanda. For testing I want to label the tapes from 001 to 020 and no more:
> 
> [..]

Better use 'label barcodes' with a slot range.
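
For example, in bconsole (pool name hypothetical):

*label barcodes slots=1-20 pool=Scratch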

Ralf



Re: [Bacula-users] multiple spool files per job

2011-10-08 Thread Ralf Gross
James Harper schrieb:
> Is there a way to make bacula write multiple spool files per job? Two
> would do. What I'm seeing is that 4 jobs start, all hit their spool
> limit around the same time, then all wait in a queue until the file is
> despooled. The despool happens fairly quickly (much quicker than the
> spooling due to network and server fd throughput) so it isn't a huge
> problem, but it would be better if the sd could just switch over to
> another spool file when despooling starts so that the backup can
> continue uninterrupted.
> 
> I'm spooling to internal RAID, then despooling to external USB. While
> spooling isn't really advised when the backup target is a disk, doing it
> this way means I can run multiple jobs at once without causing
> interleaving in the backup file (single sd volume) or severe filesystem
> fragmentation (if one sd volume per job). Internal RAID writes at
> ~100MB/second while the USB disk writes at ~30MB/second so it turns out
> to be a pretty effective way to do what I want except that despooling is
> causing a bottleneck.
> 
> Any suggestions?

No, this has been on the feature request list for a while now.
Spooling nearly doubles my time for large backups.

http://thread.gmane.org/gmane.comp.sysutils.backup.bacula.devel/15351

Item 10: Concurrent spooling and despooling within a single job.
http://www.bacula.org/git/cgit.cgi/bacula/plain/bacula/projects?h=Branch-5.1

Ralf



Re: [Bacula-users] Verify Catalog

2011-06-02 Thread Ralf Gross
John Drescher schrieb:
> On Thu, Jun 2, 2011 at 5:10 AM, Ralf Gross  wrote:
> > John Drescher schrieb:
> >> On Thu, Jun 2, 2011 at 4:05 AM, Ralf Gross  wrote:
> >> > Rickifer Barros schrieb:
> >> >> Yes John...
> >> >>
> >> >> Now, I think that I understood perfectly and I have tested it too.
> >> >>
> >> >> VolumeToCatalog = Compares the files in a Storage Volume with the 
> >> >> Catalog;
> >> >
> >> >
> >> > VolumeToCatalog does not read the file content and compare it; it just
> >> > reads the attributes and compares them (md5sum).
> >> >
> >> > quote:
> >> > http://www.bacula.org/en/dev-manual/main/main/Configuring_Director.html
> >> >
> >> > VolumeToCatalog
> >> >    This level causes Bacula to read the file attribute data written
> >> > to the Volume from the last Job. The file attribute data are compared
> >> > to the values saved in the Catalog database and any differences are
> >> > reported. This is similar to the Catalog level except that instead of
> >> > comparing the disk file attributes to the catalog database, the
> >> > attribute data written to the Volume is read and compared to the
> >> > catalog database.
> >>
> >> This part to me says it does compare the data on the volume with the
> >> hash MD5/SHA1 that is stored in the database.
> >
> >
> > I always thought that a VolumeToCatalog job would not read the
> > data written to the volume and calculate the md5sum again. Instead it
> > would just read the file attributes that were written to the volume and
> > compare that with the attributes in the catalog.
> >
> > At least the volume format described in developers.pdf suggests that the
> > md5sum information is part of the data stream.
> >
> > 10.7 Record Header
> >
> > #define STREAM_UNIX_ATTRIBUTES 1 /* Generic Unix attributes */
> > #define STREAM_FILE_DATA 2 /* Standard uncompressed data */
> > #define STREAM_MD5_SIGNATURE 3 /* MD5 signature for the file */
> > #define STREAM_GZIP_DATA 4 /* GZip compressed file data */
> > /* Extended Unix attributes with Win32 Extended data. Deprecated. */
> > #define STREAM_UNIX_ATTRIBUTES_EX 5 /* Extended Unix attr for Win32 EX */
> > #define STREAM_SPARSE_DATA 6 /* Sparse data stream */
> > #define STREAM_SPARSE_GZIP_DATA 7
> > #define STREAM_PROGRAM_NAMES 8 /* program names for program data */
> > #define STREAM_PROGRAM_DATA 9 /* Data needing program */
> > #define STREAM_SHA1_SIGNATURE 10 /* SHA1 signature for the file */
> > #define STREAM_WIN32_DATA 11 /* Win32 BackupRead data */
> > #define STREAM_WIN32_GZIP_DATA 12 /* Gzipped Win32 BackupRead data */
> > #define STREAM_MACOS_FORK_DATA 13 /* Mac resource fork */
> > #define STREAM_HFSPLUS_ATTRIBUTES 14 /* Mac OS extra attributes */
> > #define STREAM_UNIX_ATTRIBUTES_ACCESS_ACL 15 /* Standard ACL attributes on UNIX */
> > #define STREAM_UNIX_ATTRIBUTES_DEFAULT_ACL 16 /* Default ACL attributes on UNIX */
> >
> >
> >
> > So I think that only the md5sums on the volume are compared with the
> > md5sums in the catalog. If the file data on the volume was damaged somehow,
> > this could not be detected.
> >
> 
> The reason I think it is reading the volume is the comparison to the
> Catalog level which has to read the filesystem data to compare that to
> the md5sum in the catalog.
> 
> http://bacula.org/5.0.x-manuals/en/main/main/Using_Bacula_Improve_Comput.html

Hm, I really don't know.

VolumeToCatalog
"This level causes Bacula to read the _file_attribute_data_ written to
the Volume from the last Job."

The best way would be to test it or to ask Kern on the dev list. I see
you already have a test case ;)

Ralf



Re: [Bacula-users] Verify Catalog

2011-06-02 Thread Ralf Gross
John Drescher schrieb:
> On Thu, Jun 2, 2011 at 4:05 AM, Ralf Gross  wrote:
> > Rickifer Barros schrieb:
> >> Yes John...
> >>
> >> Now, I think that I understood perfectly and I have tested it too.
> >>
> >> VolumeToCatalog = Compares the files in a Storage Volume with the Catalog;
> >
> >
> > VolumeToCatalog does not read the file content and compare it; it just
> > reads the attributes and compares them (md5sum).
> >
> > quote:
> > http://www.bacula.org/en/dev-manual/main/main/Configuring_Director.html
> >
> > VolumeToCatalog
> >    This level causes Bacula to read the file attribute data written
> > to the Volume from the last Job. The file attribute data are compared
> > to the values saved in the Catalog database and any differences are
> > reported. This is similar to the Catalog level except that instead of
> > comparing the disk file attributes to the catalog database, the
> > attribute data written to the Volume is read and compared to the
> > catalog database.
> 
> This part to me says it does compare the data on the volume with the
> hash MD5/SHA1 that is stored in the database.


I always thought that a VolumeToCatalog job would not read the
data written to the volume and calculate the md5sum again. Instead it
would just read the file attributes that were written to the volume and
compare that with the attributes in the catalog.

At least the volume format described in developers.pdf suggests that the
md5sum information is part of the data stream.

10.7 Record Header

#define STREAM_UNIX_ATTRIBUTES 1 /* Generic Unix attributes */
#define STREAM_FILE_DATA 2 /* Standard uncompressed data */
#define STREAM_MD5_SIGNATURE 3 /* MD5 signature for the file */
#define STREAM_GZIP_DATA 4 /* GZip compressed file data */
/* Extended Unix attributes with Win32 Extended data. Deprecated. */
#define STREAM_UNIX_ATTRIBUTES_EX 5 /* Extended Unix attr for Win32 EX */
#define STREAM_SPARSE_DATA 6 /* Sparse data stream */
#define STREAM_SPARSE_GZIP_DATA 7
#define STREAM_PROGRAM_NAMES 8 /* program names for program data */
#define STREAM_PROGRAM_DATA 9 /* Data needing program */
#define STREAM_SHA1_SIGNATURE 10 /* SHA1 signature for the file */
#define STREAM_WIN32_DATA 11 /* Win32 BackupRead data */
#define STREAM_WIN32_GZIP_DATA 12 /* Gzipped Win32 BackupRead data */
#define STREAM_MACOS_FORK_DATA 13 /* Mac resource fork */
#define STREAM_HFSPLUS_ATTRIBUTES 14 /* Mac OS extra attributes */
#define STREAM_UNIX_ATTRIBUTES_ACCESS_ACL 15 /* Standard ACL attributes on UNIX */
#define STREAM_UNIX_ATTRIBUTES_DEFAULT_ACL 16 /* Default ACL attributes on UNIX */



So I think that only the md5sums on the volume are compared with the md5sums in
the catalog. If the file data on the volume was damaged somehow, this could not
be detected.

Ralf



Re: [Bacula-users] Verify Catalog

2011-06-02 Thread Ralf Gross
Rickifer Barros schrieb:
> Yes John...
> 
> Now, I think that I understood perfectly and I have tested it too.
> 
> VolumeToCatalog = Compares the files in a Storage Volume with the Catalog;


VolumeToCatalog does not read the file content and compare it; it just
reads the attributes and compares them (md5sum).

quote:
http://www.bacula.org/en/dev-manual/main/main/Configuring_Director.html

VolumeToCatalog
This level causes Bacula to read the file attribute data written
to the Volume from the last Job. The file attribute data are compared
to the values saved in the Catalog database and any differences are
reported. This is similar to the Catalog level except that instead of
comparing the disk file attributes to the catalog database, the
attribute data written to the Volume is read and compared to the
catalog database. Although the attribute data including the signatures
(MD5 or SHA1) are compared, the actual file data is not compared (it
is not in the catalog). 


Ralf



Re: [Bacula-users] LTO5 not filling

2011-05-27 Thread Ralf Gross
Juan Pablo Lorier schrieb:
> > ...
> > Device {
> > Name = LTO-5  
> > Drive Index = 0
> > Media Type = LTO-5
> > Archive Device = /dev/nst0
> > AutomaticMount = yes;   # when device opened, read it
> > AlwaysOpen = yes;
> > RemovableMedia = yes;
> > RandomAccess = no;
> > AutoChanger = no;
> > Maximum File Size = 10GB;
> > }
> > ...


Do you have "Maximum Volume Bytes" set in your Pool config (or "Maximum
Volume Size" in the SD Device resource)?
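
Either would cap the volumes well below LTO-5 capacity; a sketch (pool
name and value hypothetical):

Pool {
  Name = LTO5-Pool
  ...
  Maximum Volume Bytes = 400G   # remove or raise this to fill the tape
}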

Ralf



Re: [Bacula-users] Backup Progress

2011-05-12 Thread Ralf Gross
Rickifer Barros schrieb:
> Is there some way to see the progress of a Backup or Restore during its
> execution? With the "status dir" command, I can just see "backup is
> running" and nothing more. I didn't find anything in the Manual about
> that either.

try "status client=yourclientname". This will show you the amount of
data that was already backed up. At least for backup jobs.

Ralf



Re: [Bacula-users] Speed of backups

2011-04-29 Thread Ralf Gross
Jason Voorhees schrieb:
> > I got the biggest gain by changing "Maximum File Size" to 5 GB. How
> > fast is the disk where your spool file is located?
> >
> > A different test would be to create a 10 GB file with data from
> > /dev/urandom in the spool directory and then write this file to tape
> > (e.g. nst0). Note: this will overwrite your existing data on tape and
> > you might have to release the drive in bacula.
> >
> > dd if=/spoolfile-directory/testfile of=/dev/nst0 bs=xxxk (your bacula block size)
> >
> >
> > Ralf
> 
> Ok, I made this:
> [root@qsrpsbk1 spool]# dd if=/dev/urandom of=random.tst bs=1M count=10240
> 
> [root@qsrpsbk1 spool]# dd if=random.tst of=/dev/st0 bs=2048k count=5000
> 5000+0 records in
> 5000+0 records out
> 10485760000 bytes (10 GB) copied, 160.232 s, 65.4 MB/s
> 
> The block size of 2048k is the same I'm using in the Bacula Storage Daemon
> configuration (Maximum Block Size = 2097152). Why was the speed of dd
> 65 MB/s when my despooling speed is about 80-90 MB/s?


This is too slow. Try a block size of 256k with dd; this is what I use
in bacula too. Or try different block sizes, beginning at 64k, to see
how the speed changes (see the sketch below my settings).

These are my SD settings:

Maximum File Size = 5G
Maximum Block Size = 262144
Maximum Network Buffer Size = 262144
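
A quick sketch for comparing block sizes (paths hypothetical; adjust
count so each pass writes a few GB, and note that this again overwrites
the tape):

mt -f /dev/nst0 rewind
for bs in 64k 128k 256k 512k 1M; do
    echo "testing bs=$bs"
    # dd prints the transfer rate for each pass
    dd if=/spool/random.tst of=/dev/nst0 bs=$bs count=4000
done
mt -f /dev/nst0 rewind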


 
> Well, these are my results of a bonnie++ test:
> ...
> Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> qsrpsbk1.qui 36200M 77100  96 108223 18 48701   7 77012  93 142733  8 681.7   0
>                     ------Sequential Create------ --------Random Create--------
>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
> qsrpsbk1.mydomain.com,36200M,77100,96,108223,18,48701,7,77012,93,142733,8,681.7,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
> 
> 
> What do you think about these tests?



This disk is too slow for spooling. Even with LTO4 you need a disk that
is capable of streaming the data at > 150 MB/s. I use a RAID10 with
6 WD VelociRaptor drives.


Ralf





Re: [Bacula-users] Speed of backups

2011-04-28 Thread Ralf Gross
Jason Voorhees schrieb:
> 
> I think I was confusing some terms. The speed I reported was the total
> elapsed time that my backup took. But now according to your comments I
> got this from my logs:
> 
> With spooling enabled:
> 
> - Job write elapsed time: 102 MB/s average
> - Despooling elapsed time: 84 MB/s average
> 
> 
> Without spooling enabled:
> 
> - Job write elapsed time: 68 MB/s average
> 
> These are averages obtained from a group of 5 or more jobs of each
> case (with and without spooling). So I can see that with spooling
> enabled the process of writing to tape get higher speeds than
> copy-from-fd/write-to-tape without spooling enabled.
> 
> Now the question is, why am I getting so low despooling speeds if I
> use LTO-5 tapes? Shouldn't I have higher speeds than you with LTO-4
> tapes?
> ...

I got the biggest gain by changing "Maximum File Size" to 5 GB. How
fast is the disk where your spool file is located?

A different test would be to create a 10 GB file with data from
/dev/urandom in the spool directory and then write this file to tape
(e.g. nst0). Note: this will overwrite your existing data on tape and
you might have to release the drive in bacula.

dd if=/spoolfile-directory/testfile of=/dev/nst0 bs=xxxk (your bacula block size)


Ralf



Re: [Bacula-users] Speed of backups

2011-04-28 Thread Ralf Gross
Jason Voorhees schrieb:
> 
> I'm running Bacula 5.0.3 in RHEL 6.0 x86_64 with a Library tape IBM
> TS3100 with hardware compression enabled and software (Bacula)
> compression disabled, using LTO-5 tapes. I have a Gigabit Ethernet
> network and iperf tests report me a bandwidth of 112 MB/s.
> 
> I'm not using any spooling configuration and I'm running concurrent
> jobs, just only one. This is the configuration of my fileset:
> ...

To get the maximum speed with your LTO-5 drive you should enable data
spooling and change the "Maximum File Size" parameter. The spool disk
must be a fast one, especially if you want to run concurrent jobs.
Forget hdparm as a benchmark; use bonnie++, tiobench, or iozone.
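
As a rough sketch, the relevant directives look like this (values are
illustrative, not tuned recommendations):

# bacula-sd.conf
Device {
  ...
  Maximum File Size = 5G
  Spool Directory = /var/spool/bacula   # must be on the fast disk
  Maximum Spool Size = 100G
}

# bacula-dir.conf
Job {
  ...
  Spool Data = yes
}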

Then, after you have enabled spooling and restarted the SD, start the
backup. Look in the log file for "Despooling elapsed time". This will
show you how fast the spool file can be written to tape.

The overall backup time will increase because the data will first be
written to disk and then to tape, but at least you eliminate the
network and the data source (server) as bottlenecks.

With spooling enabled and LTO-4 drives I get up to 100-140 MB/s.

Ralf



Re: [Bacula-users] Verify Job for certain Backup Job

2011-04-26 Thread Ralf Gross
Krysztofiak schrieb:
> Hello,
> is there a possibility to run a Verify Job for a certain Backup Job (not the
> last one) with Level=VolumeToCatalog?
> For example, I run two Backup Jobs with ids 1 and 2 and then I want to verify
> the first one.

That's not possible at the moment with 5.0.x, but it's on the feature
request list for version 5.1 and seems to already be completed.

http://www.bacula.org/git/cgit.cgi/bacula/plain/bacula/projects?h=Branch-5.1

Item 12: Add ability to Verify any specified Job.

Ralf



Re: [Bacula-users] Accurate backup and memory usage

2011-03-15 Thread Ralf Gross
Christian Manal schrieb:
> Thanks for the pointer. From Solaris 10 malloc(3C):
> 
>  [...] After free() is executed, this space is made available for
>  further allocation by the application, though not returned to the
>  system. Memory is returned to the system only upon termination of
>  the application. [...]
> 
> So I have to restart Bacula after everything is done to get the memory
> back. That kinda sucks and is clearly not Bacula's fault.


You may want to take a look at 2 recent bug reports:

0001686: Memory Bacula-fd is not released after completed backup
0001690: memory leak in bacula-fd (with accurate=yes)   

http://bugs.bacula.org/

Ralf



Re: [Bacula-users] problem with etc/init.d/bacula-director

2011-02-26 Thread Ralf Gross
Ralf Gross schrieb:
> giannife fe schrieb:
> > Hi, my bacula works perfectly, except that I have to start bacula-dir
> > manually (bacula-sd and bacula-fd start as they should).
> > I've got bacula 5.0.2 precompiled for ubuntu.
> > 
> > Here is the output of some commands:
> > root@dragon:/home/ego# /etc/init.d/bacula-director start
> >  * Starting Bacula Director...
> >                                        [ OK ]
> > 
> > root@dragon:/home/ego# /etc/init.d/bacula-director status
> >  * bacula-dir is not running
> 
> 
> The init script is usually named /etc/init.d/bacula-dir. What Linux
> distribution are you running? How did you install bacula?

Missed that you're running Ubuntu...

Ralf



Re: [Bacula-users] problem with etc/init.d/bacula-director

2011-02-26 Thread Ralf Gross
giannife fe schrieb:
> Hi, my bacula works perfectly, except that I have to start bacula-dir
> manually (bacula-sd and bacula-fd start as they should).
> I've got bacula 5.0.2 precompiled for ubuntu.
> 
> Here is the output of some commands:
> root@dragon:/home/ego# /etc/init.d/bacula-director start
>  * Starting Bacula Director...
>                                        [ OK ]
> 
> root@dragon:/home/ego# /etc/init.d/bacula-director status
>  * bacula-dir is not running


The init script is usually named /etc/init.d/bacula-dir. What Linux
distribution are you running? How did you install bacula?

Is the bacula-dir daemon really not running after starting it? Have
you checked with ps or netstat?
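
For example (a sketch; 9101 is the default Director port):

ps ax | grep '[b]acula-dir'
netstat -lntp | grep 9101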

You can always start the init script with '/bin/sh -x
/etc/init.d/bacula-director start' to get a bit more info about what
happens.

Ralf



Re: [Bacula-users] no upgrade path for windows servers?

2011-02-09 Thread Ralf Gross
Jeff Shanholtz schrieb:
> ...  
> Finally, I assume that if I stay with 3.0.1 on the server, I can upgrade
> clients to 5.0.3, correct? I'm crossing my fingers that the client "status"
> window will be a little more informative than just a big blank window like
> it is in 3.0.1. :)

I don't know about the other things, but newer clients can't connect to
an older director. The other way around is fine: a new director with
older clients.

Ralf




Re: [Bacula-users] usb LTO4/5 tape

2011-02-08 Thread Ralf Gross
Alessandro Bono schrieb:
> 
> I need an LTO4 or LTO5 tape drive with a USB interface; does someone know
> of a similar beast?

I don't think there is something like this. LTO-4 needs a minimum
speed of 40 MB/s; if the data rate falls below that, the drive starts
shoe-shining, which will damage the tapes and drive heads sooner or
later.

Maybe with USB 3, but I doubt it.

ralf



Re: [Bacula-users] bacula paid support

2011-02-02 Thread Ralf Gross
John Drescher schrieb:
> > Does anyone know or have experience with a company that offers a bacula
> > support contract? I just talked with  a rep from Bacula Systems in
> > Switzerland, the prices were much more than we can afford.
> >
> 
> There are a couple of users on this list that provide paid support.
> One I can remember is Arno Lehmann. However I do not remember seeing
> an email from him in a long time.

That may (or may not) be because of this:

http://www.baculasystems.com/index.php/company/our-team/arno-lehmann.html

Ralf



Re: [Bacula-users] Tape compression not working LTO-5

2011-01-15 Thread Ralf Gross
Arunav Mandal schrieb:
> I have an LTO5 tape drive and I am getting only 1.5 TB per tape even though
> tape compression is on. I am using the st driver for my tape drive.

What tape brands/types are you using?

Ralf



Re: [Bacula-users] restore ok, file permissions/owner wrong

2010-12-15 Thread Ralf Gross
Martin Simmons schrieb:
> >>>>> On Wed, 15 Dec 2010 09:44:43 +0100, Ralf Gross said:
> > 
> > I had to restore about 1.5K files; the files that needed to be restored
> > were known. So I chose restore option 7 (list of files).
> > 
> > The restore was done and all seemed fine, file permissions and ACLs of
> > the restored files were correct. _But_ bacula also changed the
> > ownership/permissions/ACLs of the directories that were in the path to
> > the restored files. The ownership was changed to root:root.
> > 
> > If I choose restore option 5 (last backup) and restore all files, the
> > ownership is set correctly.
> > 
> > Is this a known problem? This makes the restore nearly useless,
> > because users cannot access the files/directories any more.
> 
> It uses root:root when it creates a directory, but it shouldn't change the
> owner of existing directories unless you marked them for restore.

Hm, okay. I restored the files to /tmp/bacula-restores/ and then moved
them to the final destination. Seems that this was not a good idea.

Ralf



[Bacula-users] restore ok, file permissions/owner wrong

2010-12-15 Thread Ralf Gross
Hi,

I had to restore about 1.5K files; the files that needed to be restored
were known. So I chose restore option 7 (list of files).

The restore was done and all seemed fine, file permissions and ACLs of
the restored files were correct. _But_ bacula also changed the
ownership/permissions/ACLs of the directories that were in the path to
the restored files. The ownership was changed to root:root.

If I choose restore option 5 (last backup) and restore all files, the
ownership is set correctly.

Is this a known problem? This makes the restore nearly useless,
because users cannot access the files/directories any more.

Ralf



Re: [Bacula-users] Incremental Backup / Moved File doesn't get stored in the backup

2010-11-24 Thread Ralf Gross
Paulo Martinez schrieb:
> One thing that I found interesting: the "moved" files, or rather their
> old locations, are listed in the bacula job file list. Restoring the
> corresponding locations shows correct behavior (the moved file vanished).
> ...

http://bugs.bacula.org/view.php?id=1651

The developers don't consider this a bug...

 
Ralf



Re: [Bacula-users] [Bacula-devel] Bacula Project will Die

2010-11-05 Thread Ralf Gross
Heitor Medrado de Faria schrieb:
> Guys,
> 
> Each new Bacula Enterprise feature, like the new GUI Configurator,
> makes me feel that the Bacula project will die.
> It's very frustrating that a project that became a huge success as
> free software is being destroyed like that.
> I acknowledge that Kern and the other developers put lots of development
> work into Bacula - and there is not a huge contribution back. But creating
> a paid fork is not the way to get compensation.

Why not? And what's your proposal to get more developers to support
bacula and keep the project alive?

Ralf



Re: [Bacula-users] broken threading

2010-11-01 Thread Ralf Gross
Dan Langille schrieb:
> Over the past few days, I've become increasingly impatient and
> frustrated by posts that break threading.  That is, posts that lack the
> headers necessary for proper threading of emails.  Specifically, the
> References: and In-Reply-To: headers are not being preserved.
> 
> cases in point, the following threads:
> 
> * Cannot build bacula-client 5.0.3 on FreeBSD
> * Searching for files
> * PLEASE READ BEFORE POSTING
> 
> As can be found here:
> 
>http://marc.info/?l=bacula-users&r=1&b=201010&w=2
> 
> Thanks for the rant.  :)


The only way to stop this would be to block all mails from the
forum2mailinglist gateway backupcentral.com.

http://backupcentral.com/component/mailman2/

It's the same situation on the backuppc list...

Ralf



Re: [Bacula-users] Bacula Email Confirmation

2010-10-27 Thread Ralf Gross
Mark Gordon schrieb:
> Hello all, I'm using Bacula 5.0.2 and find it very useful. However, is
> there a way to have the email notification for a successfully completed
> job list all the files that were sent to storage? E.g., as with
> bconsole:
> 
> list files jobid=316 | more

With Accurate Backups enabled, the 'list files jobid=xxx' command will
no longer give you the real list of backed-up files. See bug report
0001651.

http://bugs.bacula.org/view.php?id=1651

Ralf



Re: [Bacula-users] Disabled Jobs Question

2010-10-07 Thread Ralf Gross
Phil Stracchino schrieb:
> On 10/06/10 14:35, Mingus Dew wrote:
> > John,
> >  I think I had to create a bogus schedule, because bacula wouldn't
> > accept the job config without a schedule. I think I'll disable the job
> > in bconsole and try to start it remotely. Just to see what happens...
> 
> Mingus, this is why I always create an empty schedule named "NEVER".  To
> disable automatic run of any job, long-term, without deleting the Job, I
> then simply set its Schedule to NEVER.

Bacula doesn't complain if the job resource has no schedule.
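
For reference, a minimal sketch of Phil's empty-schedule approach (names
hypothetical):

Schedule {
  Name = "NEVER"   # no Run directives, so it never fires
}

Job {
  Name = "SomeJob"
  Schedule = "NEVER"
  ...
}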

Ralf



Re: [Bacula-users] Error: Could not get Job Volume Parameters to update Bootstrap file. ERR=sql_get.c:433 No volumes found for JobId=xxxx

2010-10-05 Thread Ralf Gross
Ralf Gross schrieb:
> > job with the checksum errors:
> > 
> > 05-Okt 04:05 VUMEM004-sd JobId 26039: Volume "vumem008-inc-0655" previously 
> > written, moving to end of data.
> > 05-Okt 04:05 VUMEM004-sd JobId 26039: Ready to append to end of Volume 
> > "vumem008-inc-0655" size=4764691550
> > 05-Okt 04:05 VUMEM008-fd JobId 26039: Warning: Can't verify checksum for 
> > /bin/gunzip
> 
> ok, the checksum errors are gone after changing the Accurate Fileset
> parameters.
> 
> #accurate= pins5
> accurate= mcs

Nope.

I hit this bug:

0001612: Bacula does not save a checksum for hard links during full
backup, but expects one during accurate differential backups

http://bugs.bacula.org/view.php?id=1612

The problem was discussed earlier this month on the list.

http://thread.gmane.org/gmane.comp.sysutils.backup.bacula.general/60799/focus=60922


Ralf



Re: [Bacula-users] Accurate Backups

2010-10-05 Thread Ralf Gross
Martin Simmons schrieb:
> >>>>> On Tue, 5 Oct 2010 10:13:59 +0200, Ralf Gross said:
> > 
> > All the information about the job tells me that 3 files and a directory were
> > backed up. But the size of the backup (4,478,194,813 bytes) does not fit 
> > then.
> > 
> > I find it hard to check if accurate backups are working as they should.
> > Especially the bls output below is confusing...
> > ...
> > $ bls -V "vumem008-inc-0663|vumem008-inc-0665" VUMEM008-DISK | grep postgres-backups
> > 
> > bls JobId 26052: -rw-r--r--   1 ntp  ssl-cert  4478194813 2010-10-04 
> > 20:18:43  /postgres-backups/testdb-20101004.sql.gz
> > bls JobId 26052: drwxr-xr-x   7 ntp  ssl-cert4096 2010-10-04 
> > 19:10:40  /postgres-backups/
> > bls JobId 26052: --   - --- -- 
> >   /postgres-backups/testdb-20100928.sql.gz
> > bls JobId 26052: --   - --- -- 
> >   /postgres-backups/testdb-20100929.sql.gz
> > bls JobId 0: drwxr-xr-x   7 ntp  ssl-cert4096 2010-10-04 
> > 19:10:40  /postgres-backups/
> > bls JobId 0: --   - --- --  
> >  /postgres-backups/testdb-20100928.sql.gz
> > bls JobId 0: --   - --- --  
> >  /postgres-backups/testdb-20100929.sql.gz
> 
> The files with all the hyphens are deletion markers created by the accurate
> backup code.  They tell Bacula that the file was in the previous backup but
> not in this backup.

So the files were not backed up and they are not in the volume. But
the job output, the query, and list files tell the opposite.

This may be a better question for the developer list, but why is that?

Ralf



Re: [Bacula-users] Accurate Backups

2010-10-05 Thread Ralf Gross
Ralf Gross schrieb:
> Hi,
> 
> after updating to 5.0.3 I checked the state of some accurate backups. I'm not
> sure if I fully understand why bacula is backing up some files.
> 
> I'm interested in the job VUMEM008-psql-dumps.
> 
> 
> Terminated Jobs:
>  JobId  LevelFiles  Bytes   Status   FinishedName 
> ==
>  25963  Incr  34.449 G  OK   02-Oct-10 05:31 
> VUMEM008-psql-dumps
>  25970  Volu635 0   OK   02-Oct-10 11:00 VerifyVUMEM008
>  25971  Volu  3 0   OK   02-Oct-10 11:01 
> VerifyVUMEM008-psql-dumps
>  26008  Full104,4926.834 G  OK   04-Oct-10 04:23 VUMEM008
>  26009  Full 2526.62 G  OK   04-Oct-10 05:37 
> VUMEM008-psql-dumps
>  26014  Volu104,492 0   OK   04-Oct-10 11:33 VerifyVUMEM008
>  26015  Volu 25 0   OK   04-Oct-10 11:38 
> VerifyVUMEM008-psql-dumps
>  26026  Incr  422.28 G  OK   04-Oct-10 21:38 
> VUMEM008-psql-dumps
>  26039  Incr 16,4608.614 G  OK   05-Oct-10 04:15 VUMEM008
> 
> 
> There has been a full backup on 2010-10-04 and an incremental later that
> same day.
> 
> 
> content of the full backup:
> 
> *list files jobid=26009
> ++
> | filename
>  |
> ++
> | /postgres-backups/testdb-20101001.sql.gz
>  |
> | /postgres-backups/WAL/archive_status/   
>  |
> | /postgres-backups/WAL/  
>  |
> | /postgres-backups/mysql-backups/monthly/
>  |
> | 
> /postgres-backups/mysql-backups/daily/information_schema/information_schema_2009-01-26_16h34m.Montag.sql.gz
>  |
> | /postgres-backups/mysql-backups/daily/information_schema/   
>  |
> | 
> /postgres-backups/mysql-backups/daily/mantis/mantis_2009-01-26_16h34m.Montag.sql.gz
>   |
> | /postgres-backups/mysql-backups/daily/mantis/   
>  |
> | 
> /postgres-backups/mysql-backups/daily/mysql/mysql_2009-01-26_16h34m.Montag.sql.gz
> |
> | /postgres-backups/mysql-backups/daily/mysql/
>  |
> | /postgres-backups/mysql-backups/daily/  
>  |
> | /postgres-backups/mysql-backups/weekly/information_schema/  
>  |
> | /postgres-backups/mysql-backups/weekly/mantis/  
>  |
> | /postgres-backups/mysql-backups/weekly/mysql/   
>  |
> | /postgres-backups/mysql-backups/weekly/ 
>  |
> | /postgres-backups/testdb-20100928.sql.gz
>  |
> | /postgres-backups/testdb-20101002.sql.gz
>  |
> | /postgres-backups/testdb-20100930.sql.gz
>  |
> | /postgres-backups/lost+found/   
>  |
> | /postgres-backups/testdb-20100929.sql.gz
>  |
> | /postgres-backups/all-20101003.sql.gz   
>  |
> | /postgres-backups/xlogs/
>  |
> | /postgres-backups/basebackups/  
>  |
> | /postgres-backups/  
>  |
> | /postgres-backups/mysql-backups/
>  |
> ++
> ++-+-+--+---+--++---+
> | jobid  | name| s

Re: [Bacula-users] Error: Could not get Job Volume Parameters to update Bootstrap file. ERR=sql_get.c:433 No volumes found for JobId=xxxx

2010-10-05 Thread Ralf Gross
Ralf Gross schrieb:
> 
> it seems that I messed something up during the upgrade from 3.0.3 to
> 5.0.3 (Debian Lenny, AMD64, psql 8.4).
> 
> There is one job that repeatedly ends with error.
> 
> ERR=sql_get.c:433 No volumes found for JobId=

I purged the 2 volumes, changed the Accurate parameters (see below) and
ran the backups again. Now the volume error is gone. But I still don't
know why some files that were also in the last full backup get backed
up again (see my other mail).

*list files jobid=26051
+--+
| filename |
+--+
| /postgres-backups/testdb-20101004.sql.gz |
| /postgres-backups/   |
| /postgres-backups/testdb-20100928.sql.gz |
| /postgres-backups/testdb-20100929.sql.gz |
+--+
++-+-+--+---+--+---+---+
| jobid  | name| starttime   | type | level | jobfiles 
| jobbytes  | jobstatus |
++-+-+--+---+--+---+---+
| 26,051 | VUMEM008-psql-dumps | 2010-10-05 09:32:34 | B| I |4 
| 4,478,194,813 | T |
++-+-+--+---+--+---+---+

 
 
> job with the checksum errors:
> 
> 05-Okt 04:05 VUMEM004-sd JobId 26039: Volume "vumem008-inc-0655" previously 
> written, moving to end of data.
> 05-Okt 04:05 VUMEM004-sd JobId 26039: Ready to append to end of Volume 
> "vumem008-inc-0655" size=4764691550
> 05-Okt 04:05 VUMEM008-fd JobId 26039: Warning: Can't verify checksum for 
> /bin/gunzip

OK, the checksum errors are gone after changing the Accurate FileSet
parameters:

#accurate= pins5
accurate= mcs
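
For context, this option sits in the FileSet's Options block; a sketch
using the fileset and path from this job (other directives omitted):

FileSet {
  Name = "VUMEM008-psql-dumps"
  Include {
    Options {
      signature = MD5
      accurate = mcs    # compare mtime, ctime and size
    }
    File = /postgres-backups
  }
}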

I guess pins5 was not used in 3.0.3 (and also not complained about), and
now, after the upgrade to 5.0.3, the problem started because the checksum
was not yet in the DB.

The manual is not clear about the changes to the accurate parameters.

3.0.x
| http://www.bacula.org/3.0.x-manuals/en/install/install/Configuring_Director.html#SECTION0067
| accurate=options
| The options letters specified are used when running a Backup
| Level=Incremental/Differential in Accurate mode. The options letters
| are the same than in the verify= option below. 

5.0.x
| http://www.bacula.org/5.0.x-manuals/en/main/main/New_Features_in_5_0_0.html#SECTION0055
| In previous versions, the accurate code used the file creation and
| modification times to determine if a file was modified or not. 


Ralf



[Bacula-users] Error: Could not get Job Volume Parameters to update Bootstrap file. ERR=sql_get.c:433 No volumes found for JobId=xxxx

2010-10-05 Thread Ralf Gross
Hi,

it seems that I messed something up during the upgrade from 3.0.3 to
5.0.3 (Debian Lenny, AMD64, psql 8.4).

There is one job that repeatedly ends with error.

ERR=sql_get.c:433 No volumes found for JobId=

I'm not sure what the real problem is. Volume vumem008-inc-0663 is there and
was written at the time of the backup job.


$ls -l vumem008-inc-0663
-rw-r- 1 bacula bacula 1231688129  5. Okt 08:42 vumem008-inc-0663


There is also information about the volume in the DB.

Enter Volume name: vumem008-inc-0663
++--+-+--+---++---++
| jobid  | name | starttime   | type | level | files  | bytes   
  | status |
++--+-+--+---++---++
| 26,039 | VUMEM008 | 2010-10-05 04:05:03 | B| I | 16,460 | 
8,614,235,909 | T  |
++--+-+--+---++---++



Job 26039 was a different job of this client (system files), which ended
with checksum errors.

I'm not sure if there is a problem with the DB or something completely
unrelated to the upgrade.



job with the volume error:

05-Okt 08:38 VUMEM004-dir JobId 26046: Start Backup JobId 26046, 
Job=VUMEM008-psql-dumps.2010-10-05_08.38.25_53
05-Okt 08:38 VUMEM004-dir JobId 26046: Using Device "VUMEM008-DISK"
05-Okt 08:38 VUMEM004-dir JobId 26046: Sending Accurate information.
05-Okt 08:38 VUMEM004-sd JobId 26046: Volume "vumem008-inc-0663" previously 
written, moving to end of data.
05-Okt 08:38 VUMEM004-sd JobId 26046: Ready to append to end of Volume 
"vumem008-inc-0663" size=1231687637
05-Okt 08:42 VUMEM004-sd JobId 26046: Job write elapsed time = 00:04:19, 
Transfer rate = 0  Bytes/second
05-Okt 08:42 VUMEM004-dir JobId 26046: Error: Could not get Job Volume 
Parameters to update Bootstrap file. ERR=sql_get.c:433 No volumes found for 
JobId=26046

05-Okt 08:42 VUMEM004-dir JobId 26046: Error: sql_get.c:377 No volumes found 
for JobId=26046
05-Okt 08:42 VUMEM004-dir JobId 26046: Bacula VUMEM004-dir 5.0.3 (04Aug10): 
05-Okt-2010 08:42:46
  Build OS:   x86_64-pc-linux-gnu debian 5.0.5
  JobId:  26046
  Job:VUMEM008-psql-dumps.2010-10-05_08.38.25_53
  Backup Level:   Incremental, since=2010-10-05 05:30:03
  Client: "VUMEM008-fd" 5.0.2 (28Apr10) 
i486-pc-linux-gnu,debian,5.0.4
  FileSet:"VUMEM008-psql-dumps" 2010-01-05 05:30:00
  Pool:   "VUMEM008-Disk-Incremental" (From Job IncPool 
override)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"VUMEM008-DISK" (From Pool resource)
  Scheduled time: 05-Okt-2010 08:38:06
  Start time: 05-Okt-2010 08:38:27
  End time:   05-Okt-2010 08:42:46
  Elapsed time:   4 mins 19 secs
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   22,283,895,422 (22.28 GB)
  SD Bytes Written:   0 (0 B)
  Rate:   86038.2 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   yes
  Volume name(s): 
  Volume Session Id:  27
  Volume Session Time:1286190654
  Last Volume Bytes:  1,231,688,129 (1.231 GB)
  Non-fatal FD errors:2
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK -- with warnings

05-Okt 08:42 VUMEM004-dir JobId 26046: Begin pruning Jobs older than 1 month .
05-Okt 08:42 VUMEM004-dir JobId 26046: No Jobs found to prune.
05-Okt 08:42 VUMEM004-dir JobId 26046: Begin pruning Jobs.
05-Okt 08:42 VUMEM004-dir JobId 26046: No Files found to prune.
05-Okt 08:42 VUMEM004-dir JobId 26046: End auto prune.


job with the checksum errors:

05-Okt 04:05 VUMEM004-dir JobId 26039: Start Backup JobId 26039, 
Job=VUMEM008.2010-10-05_04.05.00_36
05-Okt 04:05 VUMEM004-dir JobId 26039: Using Device "VUMEM008-DISK"
05-Okt 04:05 VUMEM004-dir JobId 26039: Sending Accurate information.
05-Okt 04:05 VUMEM004-sd JobId 26039: Volume "vumem008-inc-0655" previously 
written, moving to end of data.
05-Okt 04:05 VUMEM004-sd JobId 26039: Ready to append to end of Volume 
"vumem008-inc-0655" size=4764691550
05-Okt 04:05 VUMEM008-fd JobId 26039: Warning: Can't verify checksum for 
/bin/gunzip
05-Okt 04:06 VUMEM008-fd JobId 26039: Warning: Can't verify checksum for 
/usr/share/man/man8/mkfs.ext4.8.gz
05-Okt 04:06 VUMEM008-fd JobId 26039: Warning: Can't verify checksum for 
/usr/share/man/man8/fsck.ext4dev.8.gz
[]
05-Okt 04:09 VUMEM008-fd JobId 26039: Warning: Can't verify checksum for 
/sbin/fsck.ext3
05-Okt 04:09 VUMEM008-fd JobId 26039: Warning: Can't verify checksum for 
/sbin/tune2fs
05-Okt 04:09 VUMEM004-sd JobId 26039: User defined maximum volume capacity 
5,000,000,000 exceeded on device "VUMEM008-DISK" 
(/data/bacul

[Bacula-users] Accurate Backups

2010-10-04 Thread Ralf Gross
Hi,

after updating to 5.0.3 I checked the state of some accurate backups. I'm not
sure if I fully understand why bacula is backing up some files.

I'm interested in the job VUMEM008-psql-dumps.


Terminated Jobs:
 JobId  LevelFiles  Bytes   Status   FinishedName 
==
 25963  Incr  34.449 G  OK   02-Oct-10 05:31 VUMEM008-psql-dumps
 25970  Volu635 0   OK   02-Oct-10 11:00 VerifyVUMEM008
 25971  Volu  3 0   OK   02-Oct-10 11:01 
VerifyVUMEM008-psql-dumps
 26008  Full104,4926.834 G  OK   04-Oct-10 04:23 VUMEM008
 26009  Full 2526.62 G  OK   04-Oct-10 05:37 VUMEM008-psql-dumps
 26014  Volu104,492 0   OK   04-Oct-10 11:33 VerifyVUMEM008
 26015  Volu 25 0   OK   04-Oct-10 11:38 
VerifyVUMEM008-psql-dumps
 26026  Incr  422.28 G  OK   04-Oct-10 21:38 VUMEM008-psql-dumps
 26039  Incr 16,4608.614 G  OK   05-Oct-10 04:15 VUMEM008


There has been a full backup on 2010-10-04 and an incremental later that same
day.


content of the full backup:

*list files jobid=26009
++
| filename  
   |
++
| /postgres-backups/testdb-20101001.sql.gz  
   |
| /postgres-backups/WAL/archive_status/ 
   |
| /postgres-backups/WAL/
   |
| /postgres-backups/mysql-backups/monthly/  
   |
| 
/postgres-backups/mysql-backups/daily/information_schema/information_schema_2009-01-26_16h34m.Montag.sql.gz
 |
| /postgres-backups/mysql-backups/daily/information_schema/ 
   |
| 
/postgres-backups/mysql-backups/daily/mantis/mantis_2009-01-26_16h34m.Montag.sql.gz
  |
| /postgres-backups/mysql-backups/daily/mantis/ 
   |
| 
/postgres-backups/mysql-backups/daily/mysql/mysql_2009-01-26_16h34m.Montag.sql.gz
|
| /postgres-backups/mysql-backups/daily/mysql/  
   |
| /postgres-backups/mysql-backups/daily/
   |
| /postgres-backups/mysql-backups/weekly/information_schema/
   |
| /postgres-backups/mysql-backups/weekly/mantis/
   |
| /postgres-backups/mysql-backups/weekly/mysql/ 
   |
| /postgres-backups/mysql-backups/weekly/   
   |
| /postgres-backups/testdb-20100928.sql.gz  
   |
| /postgres-backups/testdb-20101002.sql.gz  
   |
| /postgres-backups/testdb-20100930.sql.gz  
   |
| /postgres-backups/lost+found/ 
   |
| /postgres-backups/testdb-20100929.sql.gz  
   |
| /postgres-backups/all-20101003.sql.gz 
   |
| /postgres-backups/xlogs/  
   |
| /postgres-backups/basebackups/
   |
| /postgres-backups/
   |
| /postgres-backups/mysql-backups/  
   |
++
++-+-+--+---+--++---+
| jobid  | name| starttime   | type | level | jobfiles 
| jobbytes   | jobstatus |
++-+-+--+---+--++---+
| 26,009 | VUMEM008-psql-dumps | 2010-10-04 05:30:02 | B| F |   25 
| 26,627,699,629 | T |
++-+-+--+---+--++---+


content of the incremental job:

*list files jobid=26026
+--+
| filename |
+---

Re: [Bacula-users] 5.0.3 psql indexes after upgrade

2010-10-04 Thread Ralf Gross
John Drescher schrieb:
> On Mon, Oct 4, 2010 at 3:01 PM, Ralf Gross  wrote:
> > Rory Campbell-Lange schrieb:
> >> On 04/10/10, Ralf Gross (ralf-li...@ralfgross.de) wrote:
> >>
> >> All of the indexes are below; you seem to have the correct ones for the 
> >> file table.
> >>
> >> The Debian problems with 5.0.3 were/are related to the upgrade trying to 
> >> create
> >> an index that already exists. See  Bug#591293.
> >>  ...
> >>  public | job_media_firstindex          | index | bacula | jobmedia
> >>  ...
> >>  public | job_media_lastindex           | index | bacula | jobmedia
> >
> > I don't have these two indexes, did you add them?
> >
> 
> Here is what I have on gentoo. And no I did not add any index for many years.
> 
> 
> bacula-# \di
>List of relations
>  Schema | Name  | Type  |  Owner  |Table
> +---+---+-+-
>  public | basefiles_jobid_idx   | index | hbroker | basefiles
>  public | basefiles_pkey| index | hbroker | basefiles
>  public | cdimages_pkey | index | hbroker | cdimages
>  public | client_group_idx  | index | hbroker | client_group
>  public | client_group_member_idx   | index | hbroker | 
> client_group_member
>  public | client_group_member_pkey  | index | hbroker | 
> client_group_member
>  public | client_group_pkey | index | hbroker | client_group
>  public | counters_pkey | index | hbroker | counters
>  public | device_pkey   | index | hbroker | device
>  public | file_filenameid_idx   | index | hbroker | file
>  public | file_jobid_idx| index | hbroker | file
>  public | file_jpfid_idx| index | hbroker | file
>  public | file_pathid_idx   | index | hbroker | file
>  public | file_pkey | index | hbroker | file
>  public | filename_name_idx | index | hbroker | filename
>  public | filename_pkey | index | hbroker | filename
>  public | fileset_name_idx  | index | hbroker | fileset
>  public | fileset_pkey  | index | hbroker | fileset
>  public | job_media_job_id_media_id_idx | index | hbroker | jobmedia
>  public | job_name_idx  | index | hbroker | job
>  public | job_pkey  | index | hbroker | job
>  public | jobhisto_idx  | index | hbroker | jobhisto
>  public | jobmedia_pkey | index | hbroker | jobmedia
>  public | location_pkey | index | hbroker | location
>  public | locationlog_pkey  | index | hbroker | locationlog
>  public | log_name_idx  | index | hbroker | log
>  public | log_pkey  | index | hbroker | log
>  public | media_pkey| index | hbroker | media
>  public | media_volumename_id   | index | hbroker | media
>  public | mediatype_pkey| index | hbroker | mediatype
>  public | path_name_idx | index | hbroker | path
>  public | path_pkey | index | hbroker | path
>  public | pathhierarchy_pkey| index | hbroker | pathhierarchy
>  public | pathhierarchy_ppathid | index | hbroker | pathhierarchy
>  public | pathvisibility_jobid  | index | hbroker | pathvisibility
>  public | pathvisibility_pkey   | index | hbroker | pathvisibility
>  public | pool_name_idx | index | hbroker | pool
>  public | pool_pkey | index | hbroker | pool
>  public | status_pkey   | index | hbroker | status
>  public | storage_pkey  | index | hbroker | storage
>  public | unsavedfiles_pkey | index | hbroker | unsavedfiles
> (41 rows)

Hm, I'm missing some of the indexes:


file_pathid_idx
file_filenameid_idx
client_group_idx
client_group_member_idx
client_group_member_pkey
client_group_pkey



                 List of relations
 Schema |        Name         | Type  |  Owner   |   Table
--------+---------------------+-------+----------+-----------
 public | basefiles_jobid_idx | index | postgres | basefiles
 public | basefiles_pkey      | index | postgres | basefiles
 public | cdimages_pkey       | index | postgres | cdimages
 public | client_name_idx     | index | postgres | client
 public | client_pkey         | index | postgres | client
 public | counters_pkey       | index | postgres | counters
 public | device_pkey
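
If the two file-table indexes are really missing, they can be added by hand.
The definitions below are only guessed from the column names (untested here),
and the client_group_* relations look like a third-party schema add-on rather
than stock bacula, so I'd leave those alone:

bacula=# CREATE INDEX file_pathid_idx ON file (pathid);
bacula=# CREATE INDEX file_filenameid_idx ON file (filenameid);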

Re: [Bacula-users] 5.0.3 psql indexes after upgrade

2010-10-04 Thread Ralf Gross
Rory Campbell-Lange schrieb:
> On 04/10/10, Ralf Gross (ralf-li...@ralfgross.de) wrote:
> 
> All of the indexes are below; you seem to have the correct ones for the file 
> table.
> 
> The Debian problems with 5.0.3 were/are related to the upgrade trying to 
> create
> an index that already exists. See  Bug#591293.
>  ...
>  public | job_media_firstindex  | index | bacula | jobmedia
>  ...
>  public | job_media_lastindex   | index | bacula | jobmedia

I don't have these two indexes, did you add them?
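
A quick way to compare would be to run this in psql against the catalog
(just a sketch):

bacula=# SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'jobmedia';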

Ralf



[Bacula-users] 5.0.3 psql indexes after upgrade

2010-10-04 Thread Ralf Gross
Hi,

I just updated from 3.0.3 to 5.0.3. I know that there have been problems with
the update_postgresql_tables script. Here are my indexes:


bacula=# select * from pg_indexes where tablename='file';
 schemaname | tablename |   indexname| tablespace | 
 indexdef   
+---+++-
 public | file  | file_pkey  || CREATE UNIQUE INDEX 
file_pkey ON file USING btree (fileid)
 public | file  | file_jobid_idx || CREATE INDEX 
file_jobid_idx ON file USING btree (jobid)
 public | file  | file_jpfid_idx || CREATE INDEX 
file_jpfid_idx ON file USING btree (jobid, pathid, filenameid)
(3 rows)


Can anyone confirm that these indexes are correct? Looking at the manual, they
look OK to me.

http://www.bacula.org/5.0.x-manuals/en/main/main/Catalog_Maintenance.html#SECTION004591000


Ralf



Re: [Bacula-users] Bacula Rescue

2010-09-17 Thread Ralf Gross
Heitor Faria schrieb:
> 
> For several months, I can't compile the Bacula disaster recovery CD-ROM on 
> Debian 5.
> The make command returns several dependency and missing-path errors (e.g. 
> networking).
> Does anyone know what is happening?


I'm not sure if the Rescue CD is still supported / maintained. I'd try
a linux live CD with bacula-fd included.

Ralf



Re: [Bacula-users] bacula 5.0.3 backport for debian lenny?

2010-09-17 Thread Ralf Gross
Thomas Mueller schrieb:
> Am Tue, 14 Sep 2010 16:41:04 +0200 schrieb Ralf Gross:
> 
> > Hi,
> > 
> > I'm still using bacula 3.0.3 on debian lenny (amd64). In the next weeks
> > I wanted to move on to 5.0.x. But due to the freeze of the next debian
> > stable release there will be no 5.0.3 in debian for some time. 5.0.3
> > seems to have a lot of important bug fixes, so I'd like to skip 5.0.2
> > which is available in lenny-backports right now.
> > 
> > Does anyone use an alternative bacula repository for debian lenny?
> > 
> > Ralf
> 
> i've created one for my own use. you can copy source(to compile i386 
> version)/binaries(only amd64) from
> 
> http://chaschperli.ch/debian/lenny (search for 5.0.3)

great, thanks. Now I remember that it was your repo I was looking for
;)

Ralf



Re: [Bacula-users] bacula 5.0.3 backport for debian lenny?

2010-09-14 Thread Ralf Gross
Sven Hartge schrieb:
> On 14.09.2010 18:02, Rory Campbell-Lange wrote:
> > On 14/09/10, Ralf Gross (ralf-li...@ralfgross.de) wrote:
> 
> >> Thanks, I know this version from backports, and if there is no other
> >> deb available, I'll give it a try.
> >>
> >> From http://www.bacula.org/en/?page=news (and from the mails I
> >> received from the bacula-bugs list) I had the feeling that 5.0.3 fixed
> >> some important issues.
> > 
> > Sorry, I mis-read your intention to skip 5.0.2 (I read it as "skip to").
> > 
> > You may wish to contact the backports maintainer (rhonda at deb.at) to
> > see if there is a plan to update the backport to 5.0.3. The backports
> > maintaners were very helpful to me when I wished to do a custom
> > packaging of 5.0.2.
> 
> I am not affiliated with backports.debian.org so I cannot speak for
> them, but since there is no 5.0.3 in neither Debian unstable nor Debian
> testing, the backport will not get updated to 5.0.3
> 
> You (Ralf) need to contact the maintainer of Bacula in Debian (John
> Goerzen) and ask if he is willing to package 5.0.3 and willing to ask
> the release managers to let this version migrate to testing.
> 
> Only after all this has happened there will be a backport of 5.0.3.

There is already a bug report (wishlist item) in the debian bts about
5.0.3. But I'm not very confident that a new version will be uploaded
until squeeze is stable.

Ralf 



Re: [Bacula-users] bacula 5.0.3 backport for debian lenny?

2010-09-14 Thread Ralf Gross
Rory Campbell-Lange schrieb:
> On 14/09/10, Ralf Gross (ralf-li...@ralfgross.de) wrote:
> > Hi,
> > 
> > I'm still using bacula 3.0.3 on debian lenny (amd64). In the next
> > weeks I wanted to move on to 5.0.x. But due to the freeze of the next
> > debian stable release there will be no 5.0.3 in debian for some time.
> > 5.0.3 seems to have a lot of important bug fixes, so I'd like to skip
> > 5.0.2 which is available in lenny-backports right now.
> > 
> > Does anyone use an alternative bacula repository for debian lenny?
> 
> We have been using the backports 5.0.2-1 on Stable for a couple of
> months. It works well.
> 
> http://packages.debian.org/lenny-backports/bacula
> http://wiki.debian.org/Backports#Findingbackports


Thanks, I know this version from backports, and if there is no other
deb available, I'll give it a try.

From http://www.bacula.org/en/?page=news (and from the mails I
received from the bacula-bugs list) I had the feeling that 5.0.3 fixed
some important issues.

Ralf



[Bacula-users] bacula 5.0.3 backport for debian lenny?

2010-09-14 Thread Ralf Gross
Hi,

I'm still using bacula 3.0.3 on debian lenny (amd64). In the next
weeks I wanted to move on to 5.0.x. But due to the freeze of the next
debian stable release there will be no 5.0.3 in debian for some time.
5.0.3 seems to have a lot of important bug fixes, so I'd like to skip
5.0.2 which is available in lenny-backports right now.

Does anyone use an alternative bacula repository for debian lenny?

Ralf



Re: [Bacula-users] bconsole command to display jobs on volume

2010-07-28 Thread Ralf Gross
Lamp Zy schrieb:
> 
> I'm using bacula-5.0.2.
> 
> Is there a way in bconsole to see which jobs used a particular tape?
> 
> I can run "list jobmedia", save the output to a file and then grep for 
> the media name but it's a lot of steps and it shows only the jobid.

bconsole -> query

13: List Jobs stored on a selected MediaId
14: List Jobs stored for a given Volume name

I'm not sure if the standard queries are still there in 5.0.x, I
remember there was a change.

http://www.bacula.org/5.0.x-manuals/en/main/main/New_Features_in_5_0_0.html#SECTION005181000
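
If the canned queries are gone, the underlying SQL is roughly this and can
be run via bconsole's 'sql' command or psql ('VOL001' is a placeholder,
untested here):

SELECT DISTINCT Job.JobId, Job.Name, Job.StartTime
  FROM Media, JobMedia, Job
 WHERE Media.VolumeName = 'VOL001'
   AND Media.MediaId = JobMedia.MediaId
   AND JobMedia.JobId = Job.JobId;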


Ralf



Re: [Bacula-users] Using HP Storageworks 1/8 G2 with Bacula

2010-06-14 Thread Ralf Gross
Alan Brown schrieb:
> On Sat, 12 Jun 2010, Daniel Bareiro wrote:
> 
> > This card uses the module 'cciss' which has been loaded by the kernel:
> 
> AFAIK cciss only supports disks.

Hm, no, it should work with tapes.

http://www.kernel.org/doc/Documentation/blockdev/cciss.txt

- quote -
SCSI sequential access devices and medium changer devices are
supported and appropriate device nodes are automatically created.
(e.g.  /dev/st0, /dev/st1, etc.  See the "st" man page for more
details.) 


Maybe the chapter "SCSI tape drive and medium changer support" can help
the OP.

Ralf




Re: [Bacula-users] Using HP Storageworks 1/8 G2 with Bacula

2010-06-13 Thread Ralf Gross
Daniel Bareiro schrieb:
> ...
> Now that I see, this has nothing to do with the tape drive:
> 
> -
> backup:~# smartctl -i /dev/sda
> smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce
> Allen
> Home page is http://smartmontools.sourceforge.net/
> 
> === START OF INFORMATION SECTION ===
> Model Family: Seagate Barracuda 7200.9 family
> Device Model: ST3160211AS
> Serial Number:6PT56S0F
> Firmware Version: 3.AAE
> User Capacity:160,041,885,696 bytes
> Device is:In smartctl database [for details use: -P show]
> ATA Version is:   7
> ATA Standard is:  Exact ATA specification draft version not indicated
> Local Time is:Sat Jun 12 14:49:31 2010 ART
> SMART support is: Available - device has SMART capability.
> SMART support is: Disabled
> 
> SMART Disabled. Use option -s with argument 'on' to enable it.
> -
> 
> It is the SATA disk, so this is consistent with that I didn't find the
> special file for the tape drive. Also, if it were the tape drive, type
> should be 'Sequential-Access'.


What does lsscsi show? Anything about the tape drive / changer in
dmesg?
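
E.g. something like (both harmless to run):

lsscsi -g
dmesg | egrep -i 'tape|changer'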

Ralf



Re: [Bacula-users] low despooling speeds when parallelism is increased

2010-06-02 Thread Ralf Gross
Athanasios Douitsis schrieb:
> Tuning it down to 3+3 (from the original 6+6) seems to alleviate but not
> solve the problem completely. I totaly agree with your suggestion, from
> this point onwards it seems to be a case of finding a spooling setup
> with enough I/O oomph. 
> 
> One question however, in your setup you have limited your concurrent
> jobs to 3 as in 3 jobs overall or as in 3 jobs per drive (9 overall)? 
 
3 jobs overall.

We don't have too many jobs, but the jobs that are running are rather
large (7-10 TB).

Ralf



Re: [Bacula-users] low despooling speeds when parallelism is increased

2010-06-02 Thread Ralf Gross
Athanasios Douitsis schrieb:
> 
> Hi everyone,
> 
> Our setup consists of a Dell 2950 server (PERC6i) w/ FreeBSD7 and two HP
> Ultrium LTO4 drives installed in a twin Quantum autochanger enclosure. 
> Our bacula version is 5.0.0 (which is the current FreeBSD port version). 
> 
> Here is our problem:
> 
> When running a single job, our setup is able to consistently surpass
> 70Mbytes/sec (or even 80) on despooling, which should be reasonably
> enough.  Unfortunately, when running several jobs on both drives (for
> example 6+6 parallel jobs) our despooling speeds drop to about
> 20Mbytes/sec or even less. The speed of the jobs to finish last
> naturally ramp up, especially for the very last. Our hypothesis is that
> the spooling area cannot handle simultaneous reading (from jobs that are
> still tranfering) and writing (from the currently despooling job) too
> well, hence the performance loss. 
> 
> So far we were using a common spool area for both drives on these two
> test setups:
> 
> 1)A spool area on a Clarion CX4 Fibre Channel array (4Gbps) w/ 2x10Krpm
> disks on a raid0 configuration. 
> 2)A 2xSCSI320 raid0 striped configuration in the server itself (via the
> PERC6i controller).
> 
> Both setups yielded similarly poor results.
> 
> Our thoughts/questions:
> 
> -Should we use a separate spool area for each drive? 
> -Anyone else that has had problems with the despooling speed being too
> low? What are your proposed solutions, if any?
> 
> I realize this is not strictly a bacula question, however the matter
> should be of interest for any bacula admin out there. I understand that
> under like 40Mbytes/sec the drive constantly starts and stops, a
> process which  is detrimental to its expected lifetime (and the tape's
> as well). 

I've had the same problem. First I started with a simple 2 Disk RAID1
as spool area for our 3 LTO-4 drives. With more jobs running in
parallel this was simply not enough. In the end I put 6 WD Raptor
SATA drives in the server, configured a large RAID10 and limited the
number of concurrent jobs to 3. Now I get ~100 MB/s for each of the 3
jobs.

If you really need to run so many jobs concurrently put as many fast
disks in the server as possible and configure them as RAID10. Maybe
use SSD's instead of SATA/SAS Disk?
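
The knobs involved, as a rough sketch (names, paths and sizes are only
examples, adapt them to your setup):

# bacula-sd.conf: put the spool on the fast array
Device {
  Name = LTO4-0
  Media Type = LTO4
  Archive Device = /dev/nst0
  Spool Directory = /var/spool/bacula   # the RAID10
  Maximum Spool Size = 300G
}

# bacula-dir.conf: limit concurrency on that storage
Storage {
  Name = Autochanger
  Address = sd.example.com
  SD Port = 9103
  Password = "secret"
  Device = LTO4-0
  Media Type = LTO4
  Maximum Concurrent Jobs = 3
}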

Ralf



Re: [Bacula-users] LTO-4 Drive issues

2010-05-24 Thread Ralf Gross
skipunk schrieb:
> 
> I am aware that more nic's do not increase throughput. Basically a
> backup server has to be in place tonight and I really don't want to
> start from scratch and now reaching for anything that will resolve
> the issue within the next few hours.

1. test the lto drive with tar/dd. I think you already did that and
the performance was ok.

2. test the I/O throughput of your data server, the server that you
want to backup. Use a tool like bonnie++, tiobench...

3. test the network performance between your data server (file daemon)
and your bacula-sd server with tools like netperf, netio...
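
Rough versions of 2. and 3. (host names, paths and users are placeholders):

# disk I/O on the data server
bonnie++ -d /data/tmp -u bacula

# network throughput from the FD host to the SD host
netperf -H bacula-sd.example.com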


Nothing right now sounds like a bacula problem, you have first to find
the bottleneck, which could be the network or something else.

Ralf



Re: [Bacula-users] LTO-4 Drive issues

2010-05-22 Thread Ralf Gross
skipunk schrieb:
> 
> Hoping someone could help me out.  My department recently purchased
> a Dell PowerVault TL2000 autochanger connected via SAS5.
> 
> We are upgrading from a spectralogic AIT4 system.
> 
> The sad part is the LTO-4 drive is running much slower than the AIT
> system.  I'm avg around 714k/s. I have been reading where others
> complain about only 35 - 50 M/s instead of 120+. At this point I'd
> love to 35 - 50 M/s.
> 
> I'm running Fedora 12 & Bacula 5.0.2
> This is all on a new server. The AIT system is running on Fedora 10 & Bacula 
> 5.0.0
> 
> All the config files are similar if not identical except for
> hardware settings. I just don't seem to understand why this is so
> slow. Any suggestions would be appreciated.

Did you test the drive with tar, dd or other system tools? Did you
test the drive with btape? Anything in the kernel log file?

http://www.bacula.org/de/dev-manual/Testing_Your_Tape_Drive.html

I'd begin with the basic tests with dd and tar. Create a 5 GB large
file with random data (/dev/urandom), put the file on a fast disk and
write it to tape with dd.
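
For example (device name and paths are placeholders, bs=256k is just a
common starting point):

# create ~5 GB of incompressible test data
dd if=/dev/urandom of=/data/testfile bs=1M count=5000

# rewind, then time the write to tape
mt -f /dev/nst0 rewind
time dd if=/data/testfile of=/dev/nst0 bs=256k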

Everything below ~40 MB/s will do harm to your drive and tapes,
because the drive will start shoe-shining.

http://en.wikipedia.org/wiki/Tape_drive#Problems

Thus you should use spooling for backups jobs that can't deliver
this minimum data rate.


Ralf



Re: [Bacula-users] Ubuntu Lucid

2010-04-27 Thread Ralf Gross
Uwe Schuerkamp schrieb:
> Hi folks,
> 
> looks like lucid comes with bacula v5 which seemingly is unable to
> talk to our server running bacula 2.4.x: 
> 
> bacula-fd -c /etc/bacula/bacula-fd.conf -d 99 -f 
> bacula-fd: filed_conf.c:452-0 Inserting director res: lotus-mon
> lotus-fd: filed.c:275-0 filed: listening on port 9102
> lotus-fd: cram-md5.c:73-0 send: auth cram-md5
> <1993776720.1272363...@lotus-fd> ssl=0
> lotus-fd: cram-md5.c:152-0 sending resp to challenge:
> XXX
> 
> 
> following that, there's a "client rejected hello command" message on
> the server. 
> 
> Is there any way to turn on 2.x "client compatibility" in a 5.0.1
> client? 

nope

client version must be <= server version

Ralf



Re: [Bacula-users] Using removable SATA disks as media

2010-04-21 Thread Ralf Gross
Phil Stracchino schrieb:
> > The /usr slice is about 4.8G, and shouldn't be changing-- at least
> > not at the tune of 51M every night.
> 
> 
> "Shouldn't" is a powerful word.  You might want to test the theory by
> doing something like this:
> 
> find /usr /opt/bacula/bin -mtime -1 -ls
> 
> to list all files that have been modified in the last 24 hours (or
> -mtime -2 to check the last 48 hours) and see what's changing.  Another
> alternative would be to start up BAT, go to Jobs Run, select job 183,
> right-click, and select "List Files On Job"; Bacula will tell you
> precisely which files were backed up.  (You can accomplish the same
> thing using bconsole, but it'll require a manual SQL query.)  Then you
> can look at the specific files that you know were backed up, and
> determine why they got backed up.

'list files jobid=xxx' in bconsole will also show all backed up files
for a jobid without a custom sql query.

Ralf



Re: [Bacula-users] heartbeat problems

2010-04-21 Thread Ralf Gross
Spencer Marshall schrieb:
> 
> Has anyone else experienced problems with "Packet size too big" in
> 5.0.1?  I was running a Level=VolumeToCatalog at the time.  I found
> bug an old bug http://bugs.bacula.org/view.php?id=1061 which
> appeared to have the same problems.  I removed the "Heartbeat
> Interval" from the fd, and it ran to successful completion.  When I
> put it back in, it failed with the same error.

I reported the bug initially, but I haven't seen this problem lately.
Mainly because I don't use the Heartbeat Interval anymore.

If this is with 5.0.1 then I would add a note to the old bug report
that it's still an issue. Kern wrote about the fix "It will be in
version 3.0", which seems not to be true.

Ralf



Re: [Bacula-users] Problems getting restore to work

2010-04-13 Thread Ralf Gross
Jerry Lowry schrieb:
> Martin,  I am trying to restore the files to the file system on the  
> bacula server.  The client 'swift-fd' definitely does NOT have room on  
> the disk to restore all the pdfs.  That is why my restore is configured  
> with -> "where= /backup0/bacula-restores".
>
> No,
> jlowry:swift 61>ls /home/hardware/backup0
> /home/hardware/backup0: No such file or directory
>
> When it tries the restore it fails to create the directory structure on  
> the backup server.  This is based on the error message that I get.
>
> 12-Apr 13:54 swift-fd JobId 137: Error: restore.c:1133 Write error on 
> /backup0/bacula-restores/home/hardware/pdf/rca/sarah.tv.pdf: No space left on 
> device

you are trying to restore to the client swift-fd, I guess this is not
what you want. You have to change this in your restore settings.

Ralf



Re: [Bacula-users] How to Backup a bacula director with another bacula

2010-04-13 Thread Ralf Gross
Bellucci Srl3 Bellucci Srl schrieb:
> "13-Apr 12:19 bckam101-dir JobId 46775: Fatal error: Unable to authenticate 
> with File daemon at "bckam102:9102". Possible causes: Passwords or names not 
> the same or
> 
> Maximum Concurrent Jobs exceeded on the FD or FD networking messed up 
> (restart daemon).
> 
> Please see 
> http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION00376
>  for help.
> 
> 13-Apr 12:19 bckam102-fd: Fatal Error at authenticate.c:143 because:
> 
> Incorrect password given by Director at client.
> ...

The passwords do not match. There is nothing different in backing up a
bacula server or any other client. The password must match. One thing to
take care of is the database that has to be backed up on the
bacula-dir. So you have to dump the db before backup.
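
The stock bacula-dir.conf does this with run scripts around the catalog
job; script names and paths differ between versions and distros, so treat
this only as a sketch:

Job {
  Name = "BackupCatalog"
  JobDefs = "DefaultJob"        # assumes such a JobDefs resource exists
  Level = Full
  FileSet = "Catalog"
  # dump the catalog to a file right before it is read
  RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
  # remove the dump afterwards
  RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
  Write Bootstrap = "/var/lib/bacula/BackupCatalog.bsr"
}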

Ralf



Re: [Bacula-users] verify old volume checksums

2010-03-16 Thread Ralf Gross
Brock Palen schrieb:
> Is there  a way to verify a volumes internal checksums against its data?
> 
> Even better is there a way to have the catalog compare its checksums  
> against the files in volumes?  I know about the verify job but that  
> appears to be only for the most recent job run.  I would like to do  
> this as a type of 'verify that a full backup is possible'.

No, this is not possible right now. But there is already a feature
request.

Ralf



Re: [Bacula-users] what strategy for backing up files only once?

2010-03-01 Thread Ralf Gross
Gavin McCullagh schrieb:
> Hi,
> 
> On Fri, 26 Feb 2010, Ralf Gross wrote:
> 
> > I'm still thinking if it would be possible to use bacula for backing
> > up xxx TB of data, instead of a more expensive solution with LAN-less
> > backups and snapshots.
> > 
> > Problem is the time window and bandwidth.
> 
> VirtualFull backups be a partial solution to your problem.   We have a
> laptop which we get very short backup time windows for -- never enough time
> to run a full backup.  Instead, we run incrementals (takes about 20% of the
> time) and then run virtualfull backups to consolidate them.  We never need
> to run real full backups.

Hm, I'll have to look into VirtualFull backups. But given that a single
file set will be ~10 TB, I'll still need twice the disk space
(I think).

Ralf



[Bacula-users] what strategy for backing up files only once?

2010-02-26 Thread Ralf Gross
Howdy!

I'm still thinking if it would be possible to use bacula for backing
up xxx TB of data, instead of a more expensive solution with LAN-less
backups and snapshots.

Problem is the time window and bandwidth.

If there were something like an incremental-forever feature in
bacula, the problem could be solved. I know the Accurate Backup feature,
but without deduplication (don't back up/store the same file more than
once) it's possible that the space needed for backups will grow (a user
moves or renames a directory with 10 TB of data...). Accurate Backup
will detect this and back up the data again (instead of just pointing
to the new location inside the database).

The new Base Job feature doesn't seem to help either. It's a start, but I
don't see how it could help here. It's more for large numbers of
clients with the same files in the same place.

What I need is a delta copy with file deduplication.

Am I missing something or is this just not possible right now?

Anyone backing up 100..200... TB data? (we back up some servers with
10-15 TB filesets, but a full backup takes nearly a week with verify).

Ralf



Re: [Bacula-users] Can't restore a file. Might have an issue with my mysql DB.

2010-02-23 Thread Ralf Gross
Arno Lehmann schrieb:
> Hello,
> 
> 23.02.2010 14:05, jch2os wrote:
> > So I'm trying to restore a file.  Here is what I'm doing.
> ...
> > You are now entering file selection mode where you add (mark) and
> > remove (unmark) files to be restored. No files are initially added, unless
> > you used the "all" keyword on the command line.
> > Enter "done" to leave this mode.
> > 
> > cwd is: /
> > $
> > 
> > 0 Files
> 
> Ever tried the 'ls' or 'dir' command here, at the $ prompt?

I doubt that this will help because

11 Jobs, 0 files inserted into the tree.

I guess the file retention is shorter than the job retention.
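
You can check that directly in the catalog (put one of the 11 jobids in
place of 12345; if this returns 0, the file records were pruned):

bacula=# SELECT COUNT(*) FROM file WHERE jobid = 12345;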

Ralf


Re: [Bacula-users] list volumes comes blank

2010-02-23 Thread Ralf Gross
Bostjan Müller schrieb:
> 
> I have been testing our storage system and after the tests I deleted the
> databases and created them anew. After the new startup the Pools came out
> blank - there are no volumes in them. I ran the label barcodes command and
> it found all the tapes, but it did not add them to any of the volumes.
> update pool command did not change anything (no volumes visible in the list
> volumes list).

You wiped the database where the information about the pools and
volumes was stored. I'm a bit confused that you still see the pools.

What messages did the label command show? If the volumes were labeled
before it should tell you that there is already a label on the volume.

> Is there anything more I need to do to add the volumes to the Pool?

You can use the 'add' command in bconsole:

http://www.bacula.org/de/dev-manual/Bacula_Console.html#SECTION00139

Or overwrite the labels on each volume:

http://www.bacula.org/2.4.x-manuals/en/main/Bacula_Freque_Asked_Questi.html#SECTION00378
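
From memory, the overwrite procedure boils down to this (device and volume
names are examples):

# shell, with the tape loaded in the drive
mt -f /dev/nst0 rewind
mt -f /dev/nst0 weof
mt -f /dev/nst0 rewind

# then in bconsole
*label volume=TAPE001 slot=1 pool=Default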


Ralf



Re: [Bacula-users] Concurrent spooling and despooling

2010-02-17 Thread Ralf Gross
Arno Lehmann schrieb:
> > A better way in my opinion is to used a spool sized ring buffer in 
> > memory rather then a disk based spool. The consumer would only start 
> > after the producer had put a large set amount in it and continued until 
> > drained the buffer.
> 
> Sure... those approaches are not exactly new, and they don't sound too 
> complicated. However, to discuss this, -users is not the best place. 
> If you want to propose an implementation, you should do that on -devel 
> - I'm sure Kern would like to see actual patches ;-)
 
I recently started a discussion about this on -devel.

http://thread.gmane.org/gmane.comp.sysutils.backup.bacula.devel/15351

Ralf



Re: [Bacula-users] Re store OK but no files are restored

2010-02-12 Thread Ralf Gross
Glynd schrieb:
> 
> Here is the ls -l on the ubuntu server where the bacula dir runs
> 
> r...@mistral:/home/cirrus/mailarchiva_dist# ls -l /bacula-restores/
> total 0
> 
> There is also the same directory on the Windows 7 glyn-laptop. and when I
> look in there the directory structure is there, but in a strange way. the
> path is:
> c:\bacula-restores\c\users\glyn\My Documents\restored files
> 
> So the restore is working but when I have done this previously, the files
> have been restored to the bacula server (ubuntu) /bacula-restores.

If you choose glyn-laptop as restore client (like you did) then the
files will be restored there.

 
> Bottom line, you have helped me find the files and proven that restore
> works. Many thanks

fine :)

Ralf



Re: [Bacula-users] Re store OK but no files are restored

2010-02-12 Thread Ralf Gross
glynd schrieb:
> 1 file selected to be restored.
> 
> Run Restore job
> JobName: RestoreFiles
> Bootstrap:   /var/lib/bacula/bin/working/mistral-dir.restore.9.bsr
> Where:   /bacula-restores
> Replace: always
> FileSet: Sugar Set
> Backup Client:   glyn-laptop-fd
> Restore Client:  glyn-laptop-fd
> Storage: HDDA
> When:2010-02-12 12:38:22
> Catalog: MyCatalog
> Priority:10
> Plugin Options:  *None*
> OK to run? (yes/mod/no):

Can you post the output of 'ls -l /bacula-restores' on glyn-laptop-fd?

Ralf



Re: [Bacula-users] Re store OK but no files are restored

2010-02-12 Thread Ralf Gross
Glynd schrieb:
> 
> The "where" argument I left at default /bacula-restores and that is where I
> looked.

Can you post all the options you chose for the restore (cut & paste)?

Ralf



Re: [Bacula-users] Re store OK but no files are restored

2010-02-12 Thread Ralf Gross
Glynd schrieb:
> 
> I am running Bacula 3.0.2 on Ubuntu server Apache2 MySQL
> 
> The backups seem to be working OK so I thought I had better test a restore!
> This also seemed to run OK, but there were no files restored. I tried
> different "where" locations but still no joy.
> The log snip below shows all is well too. Any ideas please?
> 
> 12-Feb 08:37 mistral-dir JobId 836: Start Restore Job
> RestoreFiles.2010-02-12_08.37.15_35
> 12-Feb 08:37 mistral-dir JobId 836: Using Device "usb-drive-a"
> 12-Feb 08:37 mistral-sd JobId 836: Ready to read from volume
> "hdda-full-0038" on device "usb-drive-a" (/mnt/bupa/bup).
> 12-Feb 08:37 mistral-sd JobId 836: Forward spacing Volume "hdda-full-0038"
> to file:block 0:36195.
> 12-Feb 08:37 mistral-dir JobId 836: Bacula mistral-dir 3.0.2 (18Jul09):
> 12-Feb-2010 08:37:55
>   Build OS:   i686-pc-linux-gnu ubuntu 9.04
>   JobId:  836
>   Job:RestoreFiles.2010-02-12_08.37.15_35
>   Restore Client: glyn-laptop-fd


And you looked on glyn-laptop-fd for the files? Where did you tell
Bacula to restore the files to (location)?

Ralf



Re: [Bacula-users] Fatal error: fd_cmds.c:177 FD command not found: <8F>~<8D>

2010-02-11 Thread Ralf Gross
Ralf Gross schrieb:
> Ralf Gross schrieb:
> > follow up
> > 
> > Cacti shows that swap started growing this morning and reached its
> > maximum when the job failed...
> 
> Restarted the job yesterday evening. Now after 24h the SD seems to eat up
> memory again.
> 
> 
> Tasks: 172 total,   1 running, 171 sleeping,   0 stopped,   0 zombie
> Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 93.0%id,  6.9%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:   4063148k total,  1646136k used,  2417012k free,20292k buffers
> Swap:  1951856k total,89640k used,  1862216k free,   135628k cached
> 
>   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND  
>   
> 10914 bacula20   0 1522m 1.3g 1480 S1 33.9 976:53.24 bacula-sd  
> 
> 
> I guess the server will start swapping again this night and the job will fail
> again.


errr, forgot the 'status storage' output.

VUMEM004-sd Version: 3.0.3 (18 October 2009) x86_64-pc-linux-gnu debian 4.0
Daemon started 10-Feb-10 19:10, 22 Jobs run since started.
 Heap: heap=1,067,560,960 smbytes=1,525,348,181 max_bytes=1,525,348,181 
bufs=114,747 max_bufs=114,748
Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8




Re: [Bacula-users] Fatal error: fd_cmds.c:177 FD command not found: <8F>~<8D>

2010-02-11 Thread Ralf Gross
Ralf Gross schrieb:
> follow up
> 
> Cacti shows that swap started growing this morning and reached its
> maximum when the job failed...

Restarted the job yesterday evening. Now after 24h the SD seems to eat up
memory again.


Tasks: 172 total,   1 running, 171 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 93.0%id,  6.9%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4063148k total,  1646136k used,  2417012k free,20292k buffers
Swap:  1951856k total,89640k used,  1862216k free,   135628k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND

10914 bacula20   0 1522m 1.3g 1480 S1 33.9 976:53.24 bacula-sd  


I guess the server will start swapping again this night and the job will fail
again.

Ralf



Re: [Bacula-users] Fatal error: fd_cmds.c:177 FD command not found: <8F>~<8D>

2010-02-10 Thread Ralf Gross
follow up

Cacti shows that swap started growing this morning and reached its
maximum when the job failed...


Ralf Gross schrieb:
> Hi,
> 
> bacula 3.0.3 SD+ DIR, 2.4.4 FD, Debian Lenny, psql 8.4
> 
> The backup job 19429 was running for nearly two days and then failed while
> changing the LTO3 tape. The job failed two times now. No messages in syslog.
> 
> The message "ERR=Datei oder Verzeichnis nicht gefunden" means "ERR=file or
> directory not found"
> 
> I deleted the 06D132L3 tape in the bacula catalog, erased the bacula label 
> with
> mt and labeled it again. No problem while loading/unloading or writing the 
> label.
> 
> The strange thing is that this error blocked another job (19427) that was
> running at the same time but on a different storage daemon! The other SD just
> stopped writing. The status was still running but no activity on the SD side. 
> 
> Any ideas?
> 
> [...]
> 10-Feb 16:17 VU0EA003-sd JobId 19427: Despooling elapsed time = 00:03:33, 
> Transfer rate = 100.8 M bytes/second
> 10-Feb 16:17 VU0EA003-sd JobId 19427: Spooling dat ...
> 10-Feb 16:18 VUMEM004-sd JobId 19429: Despooling elapsed time = 00:05:49, 
> Transfer rate = 61.53 M bytes/second
> 10-Feb 16:18 VUMEM004-sd JobId 19429: Spooling data again ...
> 10-Feb 16:26 VUMEM004-sd JobId 19429: User specified spool size reached.
> 10-Feb 16:26 VUMEM004-sd JobId 19429: Writing spooled data to Volume. 
> Despooling 21,474,877,357 bytes ...
> 10-Feb 16:31 VUMEM004-sd JobId 19429: End of Volume "06D142L3" at 575:9470 on 
> device "LTO3" (/dev/ULTRIUM-TD3). Write of 64512 bytes got -1.
> 10-Feb 16:31 VUMEM004-sd JobId 19429: Re-read of last block succeeded.
> 10-Feb 16:31 VUMEM004-sd JobId 19429: End of medium on Volume "06D142L3" 
> Bytes=575,574,128,640 Blocks=8,921,969 at 10-Feb-2010 16:31.
> 10-Feb 16:31 VUMEM004-sd JobId 19429: 3307 Issuing autochanger "unload slot 
> 23, drive 0" command.
> 10-Feb 16:31 VUMEM004-sd JobId 19429: 3995 Bad autochanger "unload slot 23, 
> drive 0": ERR=Datei oder Verzeichnis nicht gefunden
> Results=
> 10-Feb 16:31 VUMEM004-dir JobId 19429: There are no more Jobs associated with 
> Volume "06D132L3". Marking it purged.
> 10-Feb 16:31 VUMEM004-dir JobId 19429: All records pruned from Volume 
> "06D132L3"; marking it "Purged"
> 10-Feb 16:31 VUMEM004-dir JobId 19429: Recycled volume "06D132L3"
> 10-Feb 16:31 VUMEM004-sd JobId 19429: 3301 Issuing autochanger "loaded? drive 
> 0" command.
> 10-Feb 16:31 VUMEM004-sd JobId 19429: 3991 Bad autochanger "loaded? drive 0" 
> command: ERR=Datei oder Verzeichnis nicht gefunden.
> Results=
> 10-Feb 16:31 VUMEM004-sd JobId 19429: 3301 Issuing autochanger "loaded? drive 
> 0" command.
> 10-Feb 16:31 VUMEM004-sd JobId 19429: 3991 Bad autochanger "loaded? drive 0" 
> command: ERR=Datei oder Verzeichnis nicht gefunden.
> Results=
> 10-Feb 16:31 VUMEM004-sd JobId 19429: 3304 Issuing autochanger "load slot 13, 
> drive 0" command.
> 10-Feb 16:31 VUMEM004-sd JobId 19429: Fatal error: 3992 Bad autochanger "load 
> slot 13, drive 0": ERR=Datei oder Verzeichnis nicht gefunden.
> Results=
> 10-Feb 16:31 VUMEM004-sd JobId 19429: Fatal error: spool.c:302 Fatal append 
> error on device "LTO3" (/dev/ULTRIUM-TD3): ERR=
> 10-Feb 16:31 VUMEM004-sd JobId 19429: Despooling elapsed time = 00:04:37, 
> Transfer rate = 77.52 M bytes/second
> 10-Feb 16:32 VUMEM004-sd JobId 19429: Fatal error: fd_cmds.c:177 FD command 
> not found: <8F>~<8D>_<96>ee<90>5l^\=<9A>
> ESC<8E>^\z<8C>`<9D>R`=#I^X?q+^N̖cۈ#w^C+kg^KU{<9B>uLj<9E><96>^^>a
> <84>^^m^^^Qy<99>)N<9F>^C9sc3kX<9F>I+O\š%
>   ^U^R<^KJ^G3:̢˨<81>̿NJ
> ^V^^P<9D>^H<83>A^Gtٴrw젼RD^Z^?r҃^Z^H^Mu96><8E>\<99>^?x`14wQ,:骘:6
> ^?^XQ4^]<86>>^Y^\^]<8F>^Eq^G<8C><86>@cESCo^As1fi{<8D>v^A<90>)-<8C>^V^C
>  
> ^AE^F<87>]^V<9E>jb2$6`ӅV<94><9F>H"<82>5<93>]C0X^L<8A><98>\ѵ<90>h"|<8F>/^OD^UG5>^D{
> ~GZk<80><92>o^L<93><8D>hd>D-<8C>^]<8B>3^O<82>^_^\U<88><88>^\^S&f<98><9D>S#
> <99>):11^_^Ol
> 2'^_
> <8C>nx<8E>
> ^UC<8A>^G^BM}PZp9^Tr]<8A>
> AS<85>j/G^W<84>C!...@t^m^u<80>]o^D}*^T%<87>M^VydX~t^O^ACIG^Zf1\3H<8E>}yސIMϱyQpU؟Цky)K1^KdἮ`G<9C>v
> 10-Feb 16:32 VUMEM004-sd JobId 19429: Job write elapsed time = 47:21:16, 
> Transfer rate = 19.25 M bytes/s

[Bacula-users] Fatal error: fd_cmds.c:177 FD command not found: <8F>~<8D>

2010-02-10 Thread Ralf Gross
Hi,

bacula 3.0.3 SD+ DIR, 2.4.4 FD, Debian Lenny, psql 8.4

The backup job 19429 was running for nearly two days and then failed while
changing the LTO3 tape. The job failed two times now. No messages in syslog.

The message "ERR=Datei oder Verzeichnis nicht gefunden" means "ERR=file or
directory not found"

I deleted the 06D132L3 tape in the bacula catalog, erased the bacula label with
mt and labeled it again. No problem while loading/unloading or writing the label.

The strange thing is that this error blocked another job (19427) that was
running at the same time but on a different storage daemon! The other SD just
stopped writing. The status was still running but no activity on the SD side. 

Any ideas?

[...]
10-Feb 16:17 VU0EA003-sd JobId 19427: Despooling elapsed time = 00:03:33, 
Transfer rate = 100.8 M bytes/second
10-Feb 16:17 VU0EA003-sd JobId 19427: Spooling dat ...
10-Feb 16:18 VUMEM004-sd JobId 19429: Despooling elapsed time = 00:05:49, 
Transfer rate = 61.53 M bytes/second
10-Feb 16:18 VUMEM004-sd JobId 19429: Spooling data again ...
10-Feb 16:26 VUMEM004-sd JobId 19429: User specified spool size reached.
10-Feb 16:26 VUMEM004-sd JobId 19429: Writing spooled data to Volume. 
Despooling 21,474,877,357 bytes ...
10-Feb 16:31 VUMEM004-sd JobId 19429: End of Volume "06D142L3" at 575:9470 on 
device "LTO3" (/dev/ULTRIUM-TD3). Write of 64512 bytes got -1.
10-Feb 16:31 VUMEM004-sd JobId 19429: Re-read of last block succeeded.
10-Feb 16:31 VUMEM004-sd JobId 19429: End of medium on Volume "06D142L3" 
Bytes=575,574,128,640 Blocks=8,921,969 at 10-Feb-2010 16:31.
10-Feb 16:31 VUMEM004-sd JobId 19429: 3307 Issuing autochanger "unload slot 23, 
drive 0" command.
10-Feb 16:31 VUMEM004-sd JobId 19429: 3995 Bad autochanger "unload slot 23, 
drive 0": ERR=Datei oder Verzeichnis nicht gefunden
Results=
10-Feb 16:31 VUMEM004-dir JobId 19429: There are no more Jobs associated with 
Volume "06D132L3". Marking it purged.
10-Feb 16:31 VUMEM004-dir JobId 19429: All records pruned from Volume 
"06D132L3"; marking it "Purged"
10-Feb 16:31 VUMEM004-dir JobId 19429: Recycled volume "06D132L3"
10-Feb 16:31 VUMEM004-sd JobId 19429: 3301 Issuing autochanger "loaded? drive 
0" command.
10-Feb 16:31 VUMEM004-sd JobId 19429: 3991 Bad autochanger "loaded? drive 0" 
command: ERR=Datei oder Verzeichnis nicht gefunden.
Results=
10-Feb 16:31 VUMEM004-sd JobId 19429: 3301 Issuing autochanger "loaded? drive 
0" command.
10-Feb 16:31 VUMEM004-sd JobId 19429: 3991 Bad autochanger "loaded? drive 0" 
command: ERR=Datei oder Verzeichnis nicht gefunden.
Results=
10-Feb 16:31 VUMEM004-sd JobId 19429: 3304 Issuing autochanger "load slot 13, 
drive 0" command.
10-Feb 16:31 VUMEM004-sd JobId 19429: Fatal error: 3992 Bad autochanger "load 
slot 13, drive 0": ERR=Datei oder Verzeichnis nicht gefunden.
Results=
10-Feb 16:31 VUMEM004-sd JobId 19429: Fatal error: spool.c:302 Fatal append 
error on device "LTO3" (/dev/ULTRIUM-TD3): ERR=
10-Feb 16:31 VUMEM004-sd JobId 19429: Despooling elapsed time = 00:04:37, 
Transfer rate = 77.52 M bytes/second
10-Feb 16:32 VUMEM004-sd JobId 19429: Fatal error: fd_cmds.c:177 FD command not 
found: <8F>~<8D>_<96>ee<90>5l^\=<9A>
ESC<8E>^\z<8C>`<9D>R`=#I^X?q+^N̖cۈ#w^C+kg^KU{<9B>uLj<9E><96>^^>a
<84>^^m^^^Qy<99>)N<9F>^C9sc3kX<9F>I+O\š%
  ^U^R<^KJ^G3:̢˨<81>̿NJ
^V^^P<9D>^H<83>A^Gtٴrw젼RD^Z^?r҃^Z^H^Mu96><8E>\<99>^?x`14wQ,:骘:6
^?^XQ4^]<86>>^Y^\^]<8F>^Eq^G<8C><86>@cESCo^As1fi{<8D>v^A<90>)-<8C>^V^C
 
^AE^F<87>]^V<9E>jb2$6`ӅV<94><9F>H"<82>5<93>]C0X^L<8A><98>\ѵ<90>h"|<8F>/^OD^UG5>^D{
~GZk<80><92>o^L<93><8D>hd>D-<8C>^]<8B>3^O<82>^_^\U<88><88>^\^S&f<98><9D>S#
<99>):11^_^Ol
2'^_
<8C>nx<8E>
^UC<8A>^G^BM}PZp9^Tr]<8A>
AS<85>j/G^W<84>C!...@t^m^u<80>]o^D}*^T%<87>M^VydX~t^O^ACIG^Zf1\3H<8E>}yސIMϱyQpU؟Цky)K1^KdἮ`G<9C>v
10-Feb 16:32 VUMEM004-sd JobId 19429: Job write elapsed time = 47:21:16, 
Transfer rate = 19.25 M bytes/second
10-Feb 16:32 VUMEM004-sd JobId 19429: Fatal error: fd_cmds.c:166 Command error 
with FD, hanging up. Append data error.

10-Feb 16:32 VU0EM003 JobId 19429: Fatal error: backup.c:892 Network send error 
to SD. ERR=Connection reset by peer
10-Feb 16:32 VUMEM004-dir JobId 19429: Error: Bacula VUMEM004-dir 3.0.3 
(18Oct09): 10-Feb-2010 16:32:20
  Build OS:   x86_64-pc-linux-gnu debian 4.0
  JobId:  19429
  Job:VU0EM003.2010-02-08_16.26.19_08
  Backup Level:   Full
  Client: "VU0EM003" 2.4.4 (28Dec08) 
x86_64-pc-linux-gnu,debian,4.0
  FileSet:"VU0EM003" 2007-06-12 00:05:01
  Pool:   "Full" (From Job FullPool override)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"NEC-T40A_B-Net" (From user selection)
  Scheduled time: 08-Feb-2010 16:25:56
  Start time: 08-Feb-2010 16:56:49
  End time:   10-Feb-2010 16:32:20
  Elapsed time:   1 day 23 hours 35 mins 31 secs
  Priority:   10
  FD Files Written:   6,645,663
  SD Files Writte

Re: [Bacula-users] ClientTroubleShootHelp

2010-01-31 Thread Ralf Gross
Tommy schrieb:
> New to bacula
> 
> Ubuntu9.10 (Karmic..)
> bacula 2.4.4
> mysql
> 
> Director runs on machine dell.xxx.xxx as dell-dir
>   dell-fd and dell-sd test backups run fine
> Client (5.0.x) runs on thinky.xxx.xxx as thinky-fd

You can't use 5.0 bacula-fd with < 5.0 bacula-dir.

Ralf



Re: [Bacula-users] Different Retention Periods for same Client

2010-01-20 Thread Ralf Gross
CoolAtt NNA schrieb:
> 
> Hi All...
> 
> I want to backup a client with 2 different filesets each with different file 
> retention period.
> please help me how do i proceed.
> 
> I have the following in bacula-dir.conf :
> 
> Client {
>   Name = mypc
>   Address = 10.0.0.45
>   FDPort = 9102
>   Catalog = MyCatalog
>   Password = "abc123" # password for FileDaemon 2
>   File Retention = 30 days# 30 days
>   Job Retention = 6 months# six months
>   AutoPrune = yes # Prune expired Jobs/Files
> }
> 
> As can be seen above   File Retention & Job Retention has been set in the 
> client.
> So i want the above client to also have a different retention for a different 
> File Set.

You have to add the client with a different name and retention time
to the config.
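
Roughly like this, reusing your values ("mypc-archive" is a made-up name;
both entries point at the same FD):

Client {
  Name = mypc-archive
  Address = 10.0.0.45
  FDPort = 9102
  Catalog = MyCatalog
  Password = "abc123"          # same FD password as "mypc"
  File Retention = 6 months    # the different retention
  Job Retention = 12 months
  AutoPrune = yes
}

Then point the Job for the second fileset at Client = mypc-archive.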

Ralf



Re: [Bacula-users] How to avoid backing up restored files?

2010-01-14 Thread Ralf Gross
Steve Costaras schrieb:
> 
> 
> Some history:
> 
> On 01/14/2010 16:24, Dan Langille wrote:
> > Steve Costaras wrote:
> >> On 01/14/2010 15:59, Dan Langille wrote:
> >>> Steve Costaras wrote:
>  I see the mtimeonly flag in the fileset options but there are many
>  caveats about using it as you will miss other files that may have been
>  copied over that have retained mtimes from before the last backup.
>  Since bacula does an MD5/SHA1 hash of all files I assumed (wrongly it
>  seems) that it would be smart enough to not back up files that it
>  already had backed up and are on tape.
> >>> Smart enough?  Sheesh.  ;)
> >>>
> >>> That hash is to ensure the file is restored properly.  And for 
> >>> verfication.  To do what you want is not easy.
> >> :)  Well I figured it would be relatively easy as an option (since 
> >> the hash is in the database and when a file is read from disk for 
> >> backup (since it's planning on backing it up anyway it would need to 
> >> read the file, if the file name & hash match those that are in the 
> >> database the file could be skipped.   (for 'real' completeness and to 
> >> keep in line w/ the accurate option perhaps update the database with 
> >> permissions et al on the inode but since the content matches that 
> >> would save a lot of tapes).
> >
> > In the database... not on the client.  That's the issue.  Let us not 
> > discuss this here.  It is not a trivial problem to do correctly.
> 
> I see your point, the fd would need to get this data (or send the hash 
> of the file to the director) 1) after reading and calculating the entire 
> file and 2) before sending it to the director to save not only time but 
> network bandwidth, not to mention 3) having need to buffer the file data 
> in memory or to do another file system look up and re-read of the same 
> data which if done could cause more of a window for a race condition 
> unless handled properly if the file was modified between the two reads 
> and when the data was stored in the catalogue.
> 
> That would be assuming you wanted to save both network bandwidth and
> tape.   Otherwise if the fd acted normally and the decision was made by 
> the director?  Does the FD talk directly to the SD or does it need to go 
> through the director as well?

You might want to look into the new Accurate Backup feature.

http://www.bacula.org/manuals/en/concepts/concepts/New_Features.html#SECTION0031

But I think it won't help you here. What might be more interesting,
especially with your amount of data, is the upcoming Base Job feature.

http://sourceforge.net/apps/wordpress/bacula/2009/09/30/new-basejob-feature/

Ralf



Re: [Bacula-users] Q: Can 2.x clients talk to a 3.0.3 server?

2010-01-14 Thread Ralf Gross
Uwe Schuerkamp schrieb:
> Hi folks,
> 
> I just set up the first 3.0.3 bacula server (compiled from source on
> CentOS 5.4) in our environment and was wondering wether 2.x clients
> cann still talk to a 3.x server version? I cannot test this right now
> without going through major changes because the new server is not at
> its "final destination", read: our hosting facility yet.

http://www.bacula.org/en/?page=news

quote:
Compatibility:
As always, both the Director and Storage daemon must be upgraded at
the same time.

Older 3.0.x and possibly 2.4.x File Daemons are compatible with the
3.0.3 Director and Storage daemons. There should be no need to upgrade
older File Daemons.


Ralf



Re: [Bacula-users] archive support?

2010-01-11 Thread Ralf Gross
Thomas Wakefield schrieb:
> Take a directory, dump it to tape, and it will live forever (roughly
> 5-10 years) on tape.  And the copy on disk will be deleted.  But if
> needed, we could pull the copy back from tape.  We could possibly
> write 2 copies to tape for redundancy.
> 
> I already use bacula to protect over 100TB of spinning disk.  But i
> have multiple TB of data that my users "might" want to use again,
> but most likely they don't need it.

We have the same problems here. Large sets of data that might never be
touched again. To back this up, I set up a second client entry for each
of the servers with a different retention time (30y). After an archive
has been backed up (with a dump of the DB) to tape, I change the status of
the last tape from Append to Used and put all tapes in a safe.
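
The status change is a one-liner in bconsole (volume name is an example):

*update volume=ARC001L4 volstatus=Used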

Ralf



Re: [Bacula-users] Client Rejects Hello Command

2010-01-11 Thread Ralf Gross
jk04 schrieb:
> 
> Debian Server: bacula-server-2.4.4-1
> Fedora Client: bacula-client-3.0.3-5
> 
> When I check the client's status from the server the following message 
> is displayed:
> 
> Fatal error: File daemon at "192.168.0.55:9102" rejected Hello command
> 
> I believe this is because the server and the client software are not 
> version-compatible. However, my attempts to install an older version of 
> bacula (bacula-client-2.4.4-1.fc9.i386.rpm) on the client fail due to 
> dependencies.
> 
> Anyone can think of a suggestion?

You won't get a newer client working with an older director. If you
are using Debian Lenny, you could try the bacula 3.0.2 backport.

http://packages.debian.org/lenny-backports/bacula

Or build your own package from sources.

Ralf



Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-07 Thread Ralf Gross
Thomas Mueller schrieb:
> 
> >> > With
> >> > 
> >> > Maximum File Size = 5G
> >> > Maximum Block Size = 262144
> >> > Maximum Network Buffer Size = 262144
> >> > 
> >> > I get up to 150 MB/s while despooling to LTO-4 drives. Maximum File
> >> > Size gave me some extra MB/s, I think it's as important as the
> >> > Maximum Block Size.
> >> > 
> >> > 
> >> thanks for providing this hints. just searching why my lto-4 is writing
> >> just at 40mb/s. will try them out!
> >> 
> >> searching the "Maximum File Size" in the manual I found this:
> >> 
> >> If you are configuring an LTO-3 or LTO-4 tape, you probably will want
> >> to set the Maximum File Size to 2GB to avoid making the drive stop to
> >> write an EOF mark.
> >> 
> >> maybe this is the reason for the "extra mb/s".
> > 
> > Modifying the Maximum Block Size to more than 262144 didn't change much
> > here. But changing the File Size did. Much.
> 
> I found a post from Kern saying that Quantum told him that about 262144
> is the best block size - increasing it further would also increase the error rate.
> 
> > 
> > Anyway, 40 MB/s seems a bit low, even with the defaults. Before tuning
> > our setup I got ~75 MB/s. Are you spooling the data to disk or writing
> > directly to tape?
> 
> yes, i was surprised too that it is that slow. 
> 
> I'm spooling to disk first (2x 1TB disk as RAID0, dedicated to bacula for 
> spooling). i will also start a sequential read test to check if the disks 
> are the bottleneck. The slow job was the only one running.
> 
> watching iotop I saw the "maximum file size" problem: it stops writing
> after 1 GB (the default file size), writes to the DB, and then continues
> writing. So for an LTO-4 it stops nearly 800 times until the tape is full.

That's strange. Bacula shouldn't stop writing: when the Max File Size is
reached, it writes an EOF marker and then continues writing to the next
file. Have you activated attribute spooling? With it, Bacula only writes
to the DB at the end of a job.
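
If not, it's a one-line addition to the Job resource (a sketch):

Job {
  ...
  Spool Attributes = yes   # file attributes go to the DB only at job end
}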

I've never used iotop but with dstat I see a more or less constant
stream from the spool disks to the tape drive. 

If this is an HP drive, I would start with the HP LTT tool and a Media
and Drive Assessment Test. There is also a performance test. This has
helped me several times to identify bad drives or tapes.

http://h18000.www1.hp.com/products/storageworks/ltt/index.html

There are only RPMs for Linux, but with alien you can generate a deb
package for Debian if you need one. I run the tests in CLI mode (command
line).
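
The conversion itself is a one-liner (file names are just examples):

alien --to-deb hp_ltt*.rpm
dpkg -i hp-ltt*.deb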



Ralf



Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-06 Thread Ralf Gross
Thomas Mueller schrieb:
> 
> >> > I tried to do that years ago but I believe this made all tapes that
> >> > were already written to unreadable (and I now have 80) so I gave this
> >> > up. With my 5+ year old dual processor Opteron 248 server I get
> >> > 25MB/s to 45MB/s despools (which measures the actual tape rate) for
> >> > my LTO2 drives. The reason for the wide range seems to be
> >> > compression.
> >> 
> >> Can anybody confirm or rebute this for 2.2.x? I'm currently fiddling
> >> with Maximum Block Size and a shiny new tape. It looks like 1M is too
> >> much for my tape drive, but 512K seems to work and it's making a huge
> >> difference: btape fill reports > 60 MB/s right at the beginning, then
> >> drops to about 52 MB/s.
> > 
> > With
> > 
> > Maximum File Size = 5G
> > Maximum Block Size = 262144
> > Maximum Network Buffer Size = 262144
> > 
> > I get up to 150 MB/s while despooling to LTO-4 drives. Maximum File
> > Size gave me some extra MB/s, I think it's as important as the Maximum
> > Block Size.
> > 
> 
> thanks for providing these hints. I was just searching for why my LTO-4
> is writing at just 40 MB/s. Will try them out!
> 
> searching the "Maximum File Size" in the manual I found this:
> 
> If you are configuring an LTO-3 or LTO-4 tape, you probably will want to 
> set the Maximum File Size to 2GB to avoid making the drive stop to write 
> an EOF mark. 
> 
> maybe this is the reason for the "extra mb/s". 

Modifying the Maximum Block Size to more than 262144 didn't change
much here. But changing the File Size did. Much.

Anyway, 40 MB/s seems a bit low, even with the defaults. Before tuning
our setup I got ~75 MB/s. Are you spooling the data to disk or writing
directly to tape?

Ralf



Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-06 Thread Ralf Gross
Tino Schwarze schrieb:
> On Tue, Jan 05, 2010 at 04:55:51PM -0500, John Drescher wrote:
> 
> > >> I'm not seeing anywhere close to 60M/s ( < 30 ).  I think I just fixed
> > >> that.  I increased the block size to 1M, and that seemed to really
> > >> increase the throughput, in the test I just did.  I will see tomorrow,
> > >> when it all runs.
> > >
> > > Yes, if you aren't already, whenever writing to tape you should almost
> > > without exception (and certainly on any modern tape drive) be using the
> > > largest block size that btape says your drive supports.
> > 
> > I tried to do that years ago but I believe this made all tapes that
> > were already written to unreadable (and I now have 80) so I gave this
> > up. With my 5+ year old dual processor Opteron 248 server I get 25MB/s
> > to 45MB/s despools (which measures the actual tape rate) for my LTO2
> > drives. The reason for the wide range seems to be compression.
> 
> Can anybody confirm or rebut this for 2.2.x? I'm currently fiddling
> with Maximum Block Size and a shiny new tape. It looks like 1M is too
> much for my tape drive, but 512K seems to work and it's making a huge
> difference: btape fill reports > 60 MB/s right at the beginning, then
> drops to about 52 MB/s.

With

Maximum File Size = 5G
Maximum Block Size = 262144
Maximum Network Buffer Size = 262144

I get up to 150 MB/s while despooling to LTO-4 drives. Maximum File
Size gave me some extra MB/s, I think it's as important as the Maximum
Block Size.
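
For context, a sketch of where these directives live in bacula-sd.conf
(device name and paths hypothetical). Keep in mind what was said earlier
in the thread: changing Maximum Block Size makes tapes written with the
old block size unreadable.

Device {
  Name = LTO4-Drive-0
  Media Type = LTO-4
  Archive Device = /dev/nst0
  Maximum File Size = 5G                 # fewer EOF marks, higher sustained rate
  Maximum Block Size = 262144
  Maximum Network Buffer Size = 262144
  Spool Directory = /var/spool/bacula
}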

Ralf



Re: [Bacula-users] Backup/Delete/Restore & subsequent backups

2010-01-04 Thread Ralf Gross
Steve Costaras schrieb:
>
> I've been diving into Bacula the past 2-3 weeks to come up with a backup  
> system here for some small server count but very large data store sizes  
> (30+TiB per server).
>
> In the course of my testing I have noticed something and want to know if
> it's by design (in which case it would be very wasteful of tapes); a  
> misconfiguration on my part (possible as I said I've only been playing  
> with it for the past 2-3 weeks), or a bug.
>
> Ok, what I'm seeing is that I do a full backup of the client machine,  
> that runs well;  I then delete an entire subdirectory say  
> /var/ftp/{DIRA} which has say 10,000 files or about 1TiB or data.   I  
> then do a restore of that directory from tape (last backup was a full).   
> Now I am seeing that the next incremental I do has the full  
> /var/ftp/{DIRA} being backed up again as if it were all new files,  
> likewise a differential will also back up this directory in full again  
> as well.In my mind at least since this directory was in the full  
> backup and I have accurate mode on, the backup system should KNOW that  
> the files are already on tape (the full backup that was used to do the  
> restore) and should only back up /NEW/ files added to that directory  
> since that last backup not the entire directory structure after a 
> restore.
>
> Can anyone comment?

Did you run a backup while the files in /var/ftp/{DIRA} were deleted, or
was the next incremental only run after all files had been restored?
Note that a restore gives the files a new ctime, so a plain incremental
will usually pick them up again.


Maybe you are looking for the upcoming Basejob feature?

http://sourceforge.net/apps/wordpress/bacula/2009/09/30/new-basejob-feature/

Ralf



Re: [Bacula-users] Problem with concurrent jobs on same Device resource

2009-12-21 Thread Ralf Gross
Martin Reissner schrieb:
> Hello,
> 
> once again I have a problem with running concurrent jobs on my bacula
> setup. I'm using Version: 3.0.3 (18 October 2009) on DIR and SD and all
> data goes to a single SD where multiple Device Resources are configured
> (RAID-6 Harddisk Storage). Running concurrent jobs on different Device
> Resources works fine but I can't get concurrent jobs running on one
> Device Resource no matter if the Jobs use the same or different Pools.
> 
> I have set the Maximum Concurrent Jobs Option in
> Director {..} in bacula-dir.conf
> Storage {..} in bacula-dir.conf
> Client {..} in bacula-dir.conf
> Job {..} in bacula-dir.conf
> 
> Storage {..} in bacula-sd.conf
> 
> FileDaemon {..} in bacula-fd.conf
> 
> When I start two Jobs that use the same Device Resource the second Job
> is listed as "... is waiting on Storage Mystore".
> 
> On here
> (http://www.bacula.org/3.0.x-manuals/en/install/install/Storage_Daemon_Configuratio.html)
> I found that you can set a "Maximum Concurrent Jobs" option in the
> Device Resource in bacula-sd.conf but when I add that option I get the
> following error.
> 
> Starting Bacula Storage daemon: 21-Dec 11:22 bacula-sd: ERROR
> TERMINATION at parse_conf.c:954
> Config error: Keyword "MaximumConcurrentJobs" not permitted in this
> resource.
> Perhaps you left the trailing brace off of the previous resource.
> : line 135, col 26 of file /etc/bacula/bacula-sd.conf
>   Maximum Concurrent Jobs = 4;
> 
> 
> So, is it even possible to run concurrent jobs on one Storage Device and
> if so what am i missing out?

I think this was only added in the 3.1 code:

http://www.bacula.org/3.1.x-manuals/en/main/main/New_Features_in_3_1_4.html#SECTION0032
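
With 3.1.x the directive from your error message becomes valid in the
SD's Device resource - a sketch (device name borrowed from your post):

Device {
  Name = Mystore
  ...
  Maximum Concurrent Jobs = 4
}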

Ralf



Re: [Bacula-users] Purge Jobs

2009-12-21 Thread Ralf Gross
Gabriel - IP Guys schrieb:
> 
> I want to purge all failed jobs from bacula, before I synchronised my
> disk which I keep the backups on.
> 
> If I purge a job, will that job just be removed from the index, or will
> the data actually be removed from the volumes?
> [...]

the job will just be removed from the database. The volumes will only
be overwritten by bacula when they are recycled.

Ralf



Re: [Bacula-users] Newbie question about volumes and label command

2009-12-19 Thread Ralf Gross
Tom Epperly schrieb:
> Ralf Gross wrote:
>> Tom Epperly schrieb:
>>   
>>> I don't understand why bacula seems to automatically create volumes 
>>> for  some pools, but forces me to use the "label" command for others. 
>>> I've  tried increasing the "Maximum Volumes" setting, but it doesn't 
>>> seem to help.
>>> 
>>
>>
>> Have you set 'LabelMedia = yes' in the Device config for FileStorage?
>>   
> Here is the excerpt from bacula-sd.conf:
>
> Device {
>  Name = FileStorage
>  Media Type = File
>  Archive Device = /home/bacula
>  LabelMedia = yes;   # lets Bacula label unlabeled media  
>  Random Access = Yes;
>  AutomaticMount = yes;   # when device opened, read it
>  RemovableMedia = no;
>  AlwaysOpen = no;
> }
>
> Does this explain it?

yes, this looks good. Have you checked what John suggested, the max.
number of volumes?

Can you post the output of `list media pool=Windows-Full-Pool'?
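
For reference, automatic volume creation also needs a Label Format in
the Pool, and free slots under Maximum Volumes - a minimal sketch with
hypothetical values:

Pool {
  Name = Windows-Full-Pool
  Pool Type = Backup
  Label Format = "WinFull-"   # bacula appends a number automatically
  Maximum Volumes = 20        # 0 means unlimited
  ...
}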

Ralf



Re: [Bacula-users] Newbie question about volumes and label command

2009-12-19 Thread Ralf Gross
Tom Epperly schrieb:
> I don't understand why bacula seems to automatically create volumes for  
> some pools, but forces me to use the "label" command for others. I've  
> tried increasing the "Maximum Volumes" setting, but it doesn't seem to 
> help.


Have you set 'LabelMedia = yes' in the Device config for FileStorage?

Ralf



Re: [Bacula-users] Fatal append error on device - Command error with FD

2009-12-08 Thread Ralf Gross
thebuzz schrieb:
> 
> I'm having a big problem with an old Windows 2000 ASP server that I try
> to back up via Bacula
> 
> FD on Windows server is 3.0.2 and server is 3.0.2
> 
> I get a 3 GB backup every time - but when I do the bconsole command 
> list files jobid=17
> it shows 
> 
> *list files jobid=17
> No results to list.
> | JobId | Name   | StartTime           | Type | Level | JobFiles | JobBytes | JobStatus |
> |    17 | aspjob | 2009-11-30 23:06:47 | B    | F     |        0 |        0 | E         |
> 
> 
> This is the backup email log sent to us:
> 
> 01-Dec 23:06 kopi3.domain.dk-dir JobId 44: Start Backup JobId 44, 
> Job=aspjob.2009-12-01_23.05.00_05 01-Dec 23:06 kopi3.domain.dk-dir JobId 
> 44: Max configured use duration exceeded. Marking Volume "Vol0014" as 
> Used.
> 02-Dec 01:22 kopi3.domain.dk-dir JobId 44: Fatal error: Network 
> error with FD during Backup: ERR=Operation timed out 02-Dec 01:22 
> kopi3.domain.dk-sd JobId 44: Job aspjob.2009-12-01_23.05.00_05 marked to 
> be canceled.


it's a bit hard to read your mail with all the extra characters in
there...

But looking at the start and end times, it looks like the job failed
after about 2 hours. So this might be a network timeout/firewall
problem. If so, have a look at the Heartbeat Interval option, which can
be set in different places in your bacula config.
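
A sketch of the FD variant (the directive also exists in the SD and
Director configs):

FileDaemon {
  Name = client-fd
  ...
  Heartbeat Interval = 60   # send a keepalive every 60 seconds
}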

Ralf




Re: [Bacula-users] [Bacula-devel] RFC: backing up hundreds of TB

2009-11-30 Thread Ralf Gross
Kern Sibbald schrieb:
> > [...]
> > Anyone else here with the same problem? Anyone (maybe Kern or Eric)
> > here that can tell if one of the upcoming new bacula features (dedup?)
> > could help to solve the problem with the massive amount of tapes
> > needed and the growing time windows and bandwidth for backups?
> 
> Last week Eric and I visited a customer that has the following in just 1 of 5 
> datacenters:
> - 10TB/day backed up
> - 800+ Jobs/day
> - 125TB total data backed up
> - Growing rapidly
> - No tape backup (a bit unusual)


Can you write a bit more about their backup2disk setup? They must have
many very large RAID arrays which they then use as bacula SD devices.

 
> It does take some big iron to run such operations, but Bacula can and is 
> handling it. Obviously it takes professional configuration and tuning of the 
> hardware, systems, and Bacula.

No doubt. 

 
> We are discussing various ways of doing deduplication, better Windows backup, 
> better and faster bare metal recovery, better reporting, archival storage, 
> multiple datacenters coordination, ... as part of development plans for 2010.

Accurate Backup together with dedup would be a great feature (Bacula
already uses the database for Accurate Backup, so adding a dedup feature
doesn't seem too far away).

 
> The above is only one example.  There are a lot of such sites out there, some 
> we know about, and others that are doing their own thing with Bacula.  Many 
> want significant funded development, which will provide opportunities for 
> Bacula programmers (if anyone is a Windows expert programmer, please contact 
> me).  
> 
> Since we (Bacula Systems) are an full Open Source company, this will also add 
> a nice number of new features and capabilities for Bacula, since all code 
> created for funded development goes into the project.
> 
> Good luck,
> 
> Kern
> 
> PS: tip -- a key to reducing time windows is to ensure you have the right 
> bandwidth into the SD and that your catalog is properly tuned (probably 
> running PostgreSQL).


I'm happy with the performance of bacula and our LTO4 drives. A single
job is able to write to tape at 130 MB/s ;)

Ralf



Re: [Bacula-users] [Bacula-devel] RFC: backing up hundreds of TB

2009-11-28 Thread Ralf Gross
Arno Lehmann schrieb:
> 27.11.2009 13:23, Ralf Gross wrote:
> > [crosspost to -users and -devel list]
> > 
> > Hi,
> > 
> > we have been happily using bacula for a few years and are already
> > backing up some dozens of TB (large video files) to tape.
> > 
> > In the next 2-3 years the amount of data will be growing to 300+ TB.
> > We are looking for some very pricy solutions for the primary storage
> > at the moment (NetApp etc). But we (I) are also looking if it is
> > possible to go on with the way we store the data right now. Which is
> > just some large raid arrays and backup2tape.
> 
> Good luck... while I agree that SAN/NAS appliances tend to look 
> expensive, they've got their advantages when your space has to grow to 
> really big sizes. Managing only one setup, when some physical disk 
> arrays work together is one of these advantages.

I fully agree. But this comes at a price that is 5-10 times higher than
a setup with simple RAID arrays and a large changer. In the end I'll
present 2 or 3 concepts and others will decide how valuable the data is.

 
> Also, if you're running a number of big RAID arrays, reliable vendor 
> support is surely beneficial.
> 
> > I've no problem with the primary storage and 10 very big raid boxes
> > (high availability is not needed).  What frightens me is to backup all
> > the data. Most files will be written once and maybe never been
> > accessed again.
> 
> Yearly full backups, and then only incrementals (using accurate backup 
> mode) should be a usable approach. Depending on how often you expect 
> to need recovery, you may want your boss to spend some money on a 
> bigger tape library :-) to make sure most data can be restored without 
> manually loading tapes.

A big 500-slot lib is already part of the idea.

 
> > But the data need to be online and there is a
> > requirement for backup and the ability to restore deleted files
> > (retention time can be different, going from 6 months to a couple of
> > years).
> 
> 6 months retention time and actually pruning data would be easier with 
> full backups more often than one year, probably.
> 
> I think you should start by defining how long you want to keep your 
> data, how to do full backups when those jobs will surely run longer 
> than your regular backup windows (either splitting the jobs into 
> smaller parts, or making sure you can run backups over a 
> non-production network; measuring impact of backups on other file 
> system accesses).

Some of the data will only be on the filer for a couple of months, some
for a couple of years. The filer(s) won't be that busy, and there is a
dedicated LAN for backups.


> > The snapshot feature of some commercial products is a very nice
> > feature for taking backups and it's a huge benefit that only the
> > deltas are stored.
> 
> You can build on snapshot capability of SAN filers with Bacula. You'll 
> still get normal file backups, but that's an advantage IMO... the most 
> useful aspect of those snapshots is that you get a consistent state of
> the file system, and don't affect production access more than necessary.
> 
> > Backup2tape means that with the classic Full/Diff/Incr setup we'll
> > need many tapes. Even if the data on the primary storage won't change. 
> 
> Sure - a backup is reliable if you've got at least two copies of your 
> files, so for 300 TB, you'll need some tapes. But tapes are probably 
> cheaper than the required disk capacity for a NetApp filer :-)

Compared to a NetApp filer, tapes are definitely cheaper. With cheaper
RAID arrays the picture might be a bit different.

 
> > Backup2disk has the disadvantage that a corrupt filesystem (been
> > there, seen that...) can ruin TB's of backed up data. And we will need
> > file storage that is much bigger than the primary storage (keeping x
> > versions of a file).
> 
> Yup. Tapes get cheaper at that volume, especially since they don't 
> need power when stored.
> 
> > 
> > Anyone else here with the same problem? Anyone (maybe Kern or Eric)
> > here that can tell if one of the upcoming new bacula features (dedup?)
> > could help to solve the problem with the massive amount of tapes
> > needed and the growing time windows and bandwidth for backups?
> 
> Well, I'm not Kern or Eric - you've got Kerns feedback - but 
> deduplication using a deduping virtual tape library (dVTL I'll call 
> it) might be one way to go. Unfortunately, these things are, as far as 
> I know, slow compared to real tape drives. And more expensive than a 
> big RAID array for use with Bacula.

Yup, I had a look.

Re: [Bacula-users] RFC: backing up hundreds of TB

2009-11-28 Thread Ralf Gross
Alan Brown schrieb:
> On Fri, 27 Nov 2009, Ralf Gross wrote:
> 
> > Most files will be written once and maybe never be accessed again. But
> > the data need to be online and there is a requirement for backup and the
> > ability to restore deleted files (retention time can be different, going
> > from 6 months to a couple of years).
> 
> Why not use WORM for final versions? One of the biggest problems with
> video archives has always been recovering old material. Many networks
> had a policy of wiping and reusing videotapes in the 1970s, which has led
> to a lot of irreplaceable material being lost forever. It'd be good if
> history didn't repeat...

The data is not really valuable for future generations. It's needed
during a development cycle (2-3 years) and in some cases for as long as
the product is sold.

Ralf



Re: [Bacula-users] RFC: backing up hundreds of TB

2009-11-28 Thread Ralf Gross
Kevin Keane schrieb:
> Just a thought... If I understand you correctly, the files never
> change once they are created? In that case, your best bet might be
> to use a copy-based scheme for backup.


Yes, the files won't change. They are mostly raw camera data. They
will be read again, but not changed.

 
> I'd create two directories on your main server.
> 
> /current
> /to_backup
> 
> Both on the same file system.
> 
> /current (or whatever you want to call it) holds all the files.
> /to_backup holds hard links to only those files in current that
> haven't been backed up yet. You can use a script to identify those
> current files, for instance by time stamp, a checksum or the like.

Hm, wouldn't the Accurate Backup feature do the same and be less error
prone? 

 
> The actual backup can be done several different ways. You could use
> bacula. Or you could simply use cp, scp, rsync or similar. After the
> backup has completed, you delete the links in /to_backup.
> 
> That way, you only back up files that have changed, and still have a
> full set of files on the backup.

I'm not sure how this would work in case of a restore. In general I
don't like such "hacks" ;) It could be an additional source of errors,
and it might be hard to check whether the backup is really working.

But thanks for the thoughts, maybe we really have to use a set of
scripts in addition to bacula.

Ralf



[Bacula-users] RFC: backing up hundreds of TB

2009-11-27 Thread Ralf Gross
[crosspost to -users and -devel list]

Hi,

we have been happily using bacula for a few years and are already
backing up some dozens of TB (large video files) to tape.

In the next 2-3 years the amount of data will grow to 300+ TB. We are
looking at some very pricy solutions for the primary storage at the
moment (NetApp etc.). But we (I) are also looking at whether it is
possible to go on with the way we store the data right now, which is
just some large RAID arrays and backup2tape.

I've no problem with the primary storage and 10 very big RAID boxes
(high availability is not needed). What frightens me is backing up all
the data. Most files will be written once and maybe never be accessed
again. But the data needs to be online, and there is a requirement for
backup and the ability to restore deleted files (retention times can
differ, going from 6 months to a couple of years).

The snapshot feature of some commercial products is a very nice feature
for taking backups, and it's a huge benefit that only the deltas are
stored.

Backup2tape means that with the classic Full/Diff/Incr setup we'll need
many tapes, even if the data on the primary storage won't change.

Backup2disk has the disadvantage that a corrupt filesystem (been there,
seen that...) can ruin TBs of backed-up data. And we will need file
storage that is much bigger than the primary storage (keeping x versions
of a file).

Anyone else here with the same problem? Anyone (maybe Kern or Eric)
here who can tell whether one of the upcoming new bacula features
(dedup?) could help solve the problem with the massive number of tapes
needed and the growing time windows and bandwidth for backups?

Any thoughts?

Thanks, Ralf



Re: [Bacula-users] Incremental and full backups of the same host to different disks

2009-11-25 Thread Ralf Gross
Moby schrieb:
> It is my understanding that one must use the same job name for full and
> incremental backups, otherwise Bacula is not able to perform an
> incremental backup.
> I have a need to send full backups of machines to one disk and
> incremental backups to another disk.  If I have to use the same job
> definition for both incremental and full backups (with incremental and
> full backup levels being specified in the schedule definition), then how
> can I send full backups to one disk and incrementals to another?

you can override the storage device and the pool in your Schedule resource:


Schedule {
  Name = "Backup VU0EM005"
  Run = Level=Full 1st sun at 00:08
  Run = Level=Differential Storage=VU0EM005-DISK Pool=VU0EM005-Disk-Differential 2nd-5th sun at 00:08
  Run = Level=Incremental Storage=VU0EM005-DISK Pool=VU0EM005-Disk-Incremental tue-sat at 00:08
}

Ralf



Re: [Bacula-users] Verify Job: Warning: The following files are in the Catalog but not on the Volume(s):

2009-11-13 Thread Ralf Gross
Martin Simmons schrieb:
> >>>>> On Wed, 11 Nov 2009 14:56:33 +0100, Ralf Gross said:
> > 
> > Martin Simmons schrieb:
> > > >>>>> On Tue, 10 Nov 2009 10:54:26 +0100, Ralf Gross said:
> > > > 
> > > > Martin Simmons schrieb:
> > > > > >>>>> On Tue, 3 Nov 2009 09:51:17 +0100, Ralf Gross said:
> > > > > > 
> > > > > > bacula 3.0.2, psql, debian etch
> > > > > > 
> > > > > > Every now and then I receive error mails about missing files from 
> > > > > > verify jobs
> > > > > > where I can't find the problem.
> > > > > 
> > > > > Does it report all files as missing in that case or is it some subset?
> > > > 
> > > > Not all.
> > > > 
> > > >  
> > > > > What does the end of the email look like, i.e. the job report from 
> > > > > "Build OS:"
> > > > > to "Termination:"?
> > > > 
> > > > 
> > > > This verify job complained only about missing files in /usr.
> > > 
> > > Ah, this is a good clue.  What is the fileset definition?  Maybe it 
> > > includes
> > > the /usr directory twice?
> > 
> > 
> > The fileset looks ok to me.
> > 
> > FileSet {
> >   Name = Client
> >   Ignore FileSet Changes = yes
> >   Include {
> > Options {
> >   aclsupport = yes
> >   signature = MD5
> > }
> > File = /
> > File = /var
> > File = /boot
> >   }
> >   Exclude {
> > File = /media/*
> > File = /lost+found/*
> > File = /mnt/*
> > File = /dev/*
> > File = /sys/*
> > File = /proc/*
> > File = /tmp/*
> > File = /.journal
> > File = /.fsck
> >   }
> > }
> 
> Yes, that looks OK, except for the use of "*" in the Exclude clause.  The
> "File =" lines cannot contain wildcards.


I know that this shouldn't work, but it does ;)  Really!

bconsole -> restore:

$ cd ../tmp
cwd is: /tmp/
$ ls

$ cd ../mnt
cwd is: /mnt/
$ ls

But you are right, it would be better to use the correct syntax with wilddir.
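
For reference, a sketch of the wilddir-based variant (same directories
as in the fileset above):

Include {
  Options {
    aclsupport = yes
    signature = MD5
  }
  Options {
    Exclude = yes
    WildDir = "/tmp"
    WildDir = "/mnt"
    WildDir = "/proc"
  }
  File = /
  File = /var
  File = /boot
}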


 
> In your previous message, I see that Bacula reported:
> 
>   Files Expected: 50,755
>   Files Examined: 50,755
> 
> Does the backup contain exactly 50,755 files or has it miscounted them?


The numbers are correct; the same number of files were backed up.

FD Files Written:   50,755
SD Files Written:   50,755


 
> Are you accidentally running two verify jobs at the same time?


Of the same client? No

Errr... yes:

bacula.log.2.gz:31-Okt 10:05 VUMEM004-dir JobId 16737: Verifying against 
JobId=16728 Job=VU0EI001.2009-10-31_03.05.00_14
bacula.log.2.gz:31-Okt 10:05 VUMEM004-dir JobId 16736: Verifying against 
JobId=16728 Job=VU0EI001.2009-10-31_03.05.00_14

The schedule is a bit redundant

Schedule {
  Name = "Verify ITD"
  Run = Storage=Neo4100-LTO4-D3_P-Net 2nd sat at 10:05
  Run = Storage=ITD-DISK 1st,3rd,5th sat at 10:05
  Run = Storage=ITD-DISK 3rd-5th sat at 10:05
  Run = Storage=ITD-DISK mon-fri at 10:05
}


better...

Schedule {
  Name = "Verify ITD"
  Run = Storage=Neo4100-LTO4-D3_P-Net 2nd sat at 10:05
  Run = Storage=ITD-DISK 1st, 3rd-5th sat at 10:05
  Run = Storage=ITD-DISK mon-fri at 10:05
}

Thanks for the hint!

Ralf




Re: [Bacula-users] Verify Job: Warning: The following files are in the Catalog but not on the Volume(s):

2009-11-11 Thread Ralf Gross
Martin Simmons schrieb:
> >>>>> On Tue, 10 Nov 2009 10:54:26 +0100, Ralf Gross said:
> > 
> > Martin Simmons schrieb:
> > > >>>>> On Tue, 3 Nov 2009 09:51:17 +0100, Ralf Gross said:
> > > > 
> > > > bacula 3.0.2, psql, debian etch
> > > > 
> > > > Every now and then I receive error mails about missing files from 
> > > > verify jobs
> > > > where I can't find the problem.
> > > 
> > > Does it report all files as missing in that case or is it some subset?
> > 
> > Not all.
> > 
> >  
> > > What does the end of the email look like, i.e. the job report from "Build 
> > > OS:"
> > > to "Termination:"?
> > 
> > 
> > This verify job complained only about missing files in /usr.
> 
> Ah, this is a good clue.  What is the fileset definition?  Maybe it includes
> the /usr directory twice?


The fileset looks ok to me.

FileSet {
  Name = Client
  Ignore FileSet Changes = yes
  Include {
Options {
  aclsupport = yes
  signature = MD5
}
File = /
File = /var
File = /boot
  }
  Exclude {
File = /media/*
File = /lost+found/*
File = /mnt/*
File = /dev/*
File = /sys/*
File = /proc/*
File = /tmp/*
File = /.journal
File = /.fsck
  }
}


Ralf



Re: [Bacula-users] Verify Job: Warning: The following files are in the Catalog but not on the Volume(s):

2009-11-10 Thread Ralf Gross
Martin Simmons schrieb:
> >>>>> On Tue, 3 Nov 2009 09:51:17 +0100, Ralf Gross said:
> > 
> > bacula 3.0.2, psql, debian etch
> > 
> > Every now and then I receive error mails about missing files from verify 
> > jobs
> > where I can't find the problem.
> 
> Does it report all files as missing in that case or is it some subset?

Not all.

 
> What does the end of the email look like, i.e. the job report from "Build OS:"
> to "Termination:"?


This verify job complained only about missing files in /usr.


[...]
31-Okt 10:07 VUMEM004-sd JobId 16737: Forward spacing Volume "client-diff-0268" 
to file:block 0:227.
31-Okt 10:16 VUMEM004-sd JobId 16737: End of Volume at file 2 on device 
"ITD-DISK" (/data/bacula-storage/client), Volume "client-diff-0268"
31-Okt 10:16 VUMEM004-sd JobId 16737: End of all volumes.
31-Okt 10:18 VUMEM004-dir JobId 16737: Warning: The following files are in the 
Catalog but not on the Volume(s):
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/share/man/man1/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/share/man/man8/
[...]
31-Okt 10:18 VUMEM004-dir JobId 16737:   
/usr/share/man/man8/pma-configure.8.gz
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/sbin/pma-secure
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/share/man/man8/pma-secure.8.gz
31-Okt 10:18 VUMEM004-dir JobId 16737: Bacula VUMEM004-dir 3.0.2 (18Jul09): 
31-Okt-2009 10:18:11
  Build OS:   x86_64-pc-linux-gnu debian 4.0
  JobId:  16737
  Job:VerifyClient.2009-10-31_10.05.00_37
  FileSet:Client
  Verify Level:   VolumeToCatalog
  Client: VUMEM004-fd
  Verify JobId:   16728
  Verify Job: Client
  Start time: 31-Okt-2009 10:05:00
  End time:   31-Okt-2009 10:18:11
  Files Expected: 50,755
  Files Examined: 50,755
  Non-fatal FD errors:0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Verify Differences

31-Okt 10:18 VUMEM004-dir JobId 16737: Begin pruning Jobs.
31-Okt 10:18 VUMEM004-dir JobId 16737: No Jobs found to prune.
31-Okt 10:18 VUMEM004-dir JobId 16737: Begin pruning Files.
31-Okt 10:18 VUMEM004-dir JobId 16737: Pruned Files from 2 Jobs for client 
VUMEM004-fd from catalog.
31-Okt 10:18 VUMEM004-dir JobId 16737: End auto prune.


 
 
> > The problem with this is that if it happens during the weekend and a new 
> > backup
> > job finishes before I can check what the problem was, I can't rerun the 
> > verify
> > with the old jobid, because bacula can only verify the last job that 
> > finished
> > for a client (it would be very helpful if this restriction would be 
> > eliminated
> > and one could check any jobid)
> 
> Can you schedule a second verify job immediately after the current one?


If it happens during the week, that's possible. But this often happens
during full backups on Saturday; then the next incremental backup starts
before I'm back in the office on Monday.

I was able to run a new verify for one or two jobs. Both terminated
without error.

Ralf



[Bacula-users] Verify Job: Warning: The following files are in the Catalog but not on the Volume(s):

2009-11-03 Thread Ralf Gross
Hi,

bacula 3.0.2, psql, debian etch

Every now and then I receive error mails about missing files from verify jobs
where I can't find the problem.


The backup job:

31-Okt 03:34 VUMEM004-dir JobId 16728: Bacula VUMEM004-dir 3.0.2 (18Jul09): 
31-Okt-2009 03:34:20
  Build OS:   x86_64-pc-linux-gnu debian 4.0
  JobId:  16728
  Job:Client.2009-10-31_03.05.00_14
  Backup Level:   Differential, since=2009-10-10 03:05:02
  Client: "Client-fd" 2.4.4 (28Dec08) 
x86_64-pc-linux-gnu,debian,lenny/sid
  FileSet:"Client" 2008-03-06 17:49:14
  Pool:   "Client-Disk-Differential" (From Run pool override)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"Client-DISK" (From Pool resource)
  Scheduled time: 31-Okt-2009 03:05:00
  Start time: 31-Okt-2009 03:05:01
  End time:   31-Okt-2009 03:34:20
  Elapsed time:   29 mins 19 secs
  Priority:   10
  FD Files Written:   50,755
  SD Files Written:   50,755
  FD Bytes Written:   19,316,909,162 (19.31 GB)
  SD Bytes Written:   19,324,603,637 (19.32 GB)
  Rate:   10981.8 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): client-diff-0133|client-diff-0136|client-diff-0268
  Volume Session Id:  1418
  Volume Session Time:1251447295
  Last Volume Bytes:  14,153,152,558 (14.15 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK


The verify job:


31-Okt 10:05 VUMEM004-dir JobId 16737: Verifying against JobId=16728 
Job=Client.2009-10-31_03.05.00_14
31-Okt 10:05 VUMEM004-dir JobId 16737: Bootstrap records written to 
/var/lib/bacula//VUMEM004-dir.restore.684.bsr
31-Okt 10:05 VUMEM004-dir JobId 16737: Start Verify JobId=16737 
Level=VolumeToCatalog Job=VerifyClient.2009-10-31_10.05.00_37
31-Okt 10:05 VUMEM004-dir JobId 16737: Using Device "Client-DISK"
31-Okt 10:05 VUMEM004-sd JobId 16737: Ready to read from volume 
"client-diff-0133" on device "Client-DISK" (/data/bacula-storage/client).
31-Okt 10:05 VUMEM004-sd JobId 16737: Forward spacing Volume "client-diff-0133" 
to file:block 2:1088929995.
31-Okt 10:07 VUMEM004-sd JobId 16737: End of Volume at file 3 on device 
"Client-DISK" (/data/bacula-storage/client), Volume "client-diff-0133"
31-Okt 10:07 VUMEM004-sd JobId 16737: Ready to read from volume 
"client-diff-0136" on device "Client-DISK" (/data/bacula-storage/client).
31-Okt 10:07 VUMEM004-sd JobId 16737: Forward spacing Volume "client-diff-0136" 
to file:block 0:227.
31-Okt 10:07 VUMEM004-sd JobId 16737: End of Volume at file 3 on device 
"Client-DISK" (/data/bacula-storage/client), Volume "client-diff-0136"
31-Okt 10:07 VUMEM004-sd JobId 16737: Ready to read from volume 
"client-diff-0268" on device "Client-DISK" (/data/bacula-storage/client).
31-Okt 10:07 VUMEM004-sd JobId 16737: Forward spacing Volume "client-diff-0268" 
to file:block 0:227.
31-Okt 10:16 VUMEM004-sd JobId 16737: End of Volume at file 2 on device 
"Client-DISK" (/data/bacula-storage/client), Volume "client-diff-0268"
31-Okt 10:16 VUMEM004-sd JobId 16737: End of all volumes.
31-Okt 10:18 VUMEM004-dir JobId 16737: Warning: The following files are in the 
Catalog but not on the Volume(s):
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/share/man/man1/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/share/man/man8/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/bin/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /etc/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /root/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /sys/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /boot/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /etc/network/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /etc/cron.daily/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /etc/postfix/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/lib/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/sbin/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/share/doc/wget/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/share/doc/libexpat1/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/share/man/man3/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/share/info/
31-Okt 10:18 VUMEM004-dir JobId 16737:   /usr/share/locale/pl/LC_MESSAGES/
[...]
31-Okt 10:18 VUMEM004-dir JobId 16737:   
/usr/share/phpmyadmin/themes/darkblue_orange/img/b_bookmark.png
[...]



Now a test restore of file b_bookmark.png from jobid 16728:

[...]
Enter JobId(s), comma separated, to restore: 16728
You have selected the following JobId: 16728

Building directory tree for JobId(s) 16728 ...  
+
45,855 files inserted into the tree.

You are now entering file selection mode where you add (mark) and
remove (unmark) files to be restored.

Re: [Bacula-users] Bacula features

2009-10-30 Thread Ralf Gross
Arno Lehmann schrieb:
> Hi,
> 
> 30.10.2009 11:56, James Harper wrote:
> > 
> >> -Original Message-----
> >> From: Ralf Gross [mailto:r...@stz-softwaretechnik.de] On Behalf Of Ralf
> > Gross
> >> Sent: Friday, 30 October 2009 21:47
> >> To: James Harper
> >> Cc: bacula-users@lists.sourceforge.net
> >> Subject: Re: [Bacula-users] Bacula features
> >>
> >> James Harper schrieb:
> >>>> Arno Lehmann schrieb:
> >>>>> 30.10.2009 07:24, Leslie Rhorer wrote:
> >>>>>
> >>>>>> 2. Span multiple target drives
> >>>>> Sure.
> >>>> I'm not sure whether he meant a single backup job spanning
> >>>> multiple drives.
> >>>>
> >>>> This wouldn't be possible AFAIK.
> >>>>
> >>> It should work afaict. It certainly works with multiple tapes.
> >> Multiple tapes are no problem, but I don't think bacula can switch
> >> drives during a backup job or write to multiple drives in parallel. I
> >> haven't seen an option for that.
> >>
> > 
> > Hmmm... to be honest I have never had cause to find out, but I have seen
> > bacula fill up a disk and ask for another volume. The autochanger script
> > should just take care of the rest.
> 
> I agree with James... from Baculas point of view, the volume files 
> don't matter, just the place where it finds them is important. And 
> that doesn't change. At least that's how I understand vchanger works.
> 
> Quite similar to tapes - you access the tape with a constant name, but 
> which tape is in the tape drive is independent from the tape drive's name.
> 
> I guess I'll have to try vchanger some day :-)

Maybe I wasn't clear about what I meant: it's not possible for one job
to write to multiple (tape) drives simultaneously.

Ralf



Re: [Bacula-users] Bacula features

2009-10-30 Thread Ralf Gross
James Harper schrieb:
> > Arno Lehmann schrieb:
> > >
> > > 30.10.2009 07:24, Leslie Rhorer wrote:
> > >
> > > > 2. Span multiple target drives
> > >
> > > Sure.
> > 
> > I'm not sure whether he meant a single backup job spanning
> > multiple drives.
> > 
> > This wouldn't be possible AFAIK.
> > 
> 
> It should work afaict. It certainly works with multiple tapes.

Multiple tapes are no problem, but I don't think bacula can switch
drives during a backup job or write to multiple drives in parallel. I
haven't seen an option for that.

Ralf



Re: [Bacula-users] Bacula features

2009-10-30 Thread Ralf Gross
Arno Lehmann schrieb:
> 
> 30.10.2009 07:24, Leslie Rhorer wrote:
>  
> > 2. Span multiple target drives
> 
> Sure.

I'm not sure whether he meant a single backup job spanning
multiple drives.

This wouldn't be possible AFAIK.

Ralf



Re: [Bacula-users] First Backup Completed - but still confusion, full backup only 23M?

2009-10-23 Thread Ralf Gross
Gabriel - IP Guys schrieb:
> 
> 
> > -Original Message-
> > From: John Drescher [mailto:dresche...@gmail.com]
> > Sent: 22 October 2009 18:13
> > To: Gabriel - IP Guys; bacula-users
> > Subject: Re: [Bacula-users] First Backup Completed - but still
> > confusion, full backup only 23M?
> > 
> > 2009/10/22 Gabriel - IP Guys :
> > > Dear List,
> > >
> > >
> > >
> > > Finally! After some help from you guys, I've managed to get my first
> > full
> > > backup running!! Admittedly, the FD, SD, and DIR are all on the same
> > > machine, and the client is the machine in question... BUT, I got it
> > to work!
> > > First time using Bacula, didn't know anything about it two weeks
> ago,
> > and
> > > I've finally got it running. Only problem is, my first backup was
> > only 21M
> > > ?!?!? I've got 1.5G on disk. Now, I know that there are compression
> > > algorithms, but HEY! They are not THAT good! Any ideas, hints, tips
> > as to
> > > why a full backup is only 23M?
> > >
> > 
> > Bacula by default does not cross filesystem boundaries. Could this be
> > the issue?
> > 
> > John
> 
> There are no other mounts - I have everything in on the one disk. Below
> is an excerpt from the logs - I was going to use pastebin, but that
> won't show up in the future on the list archives. 


> 22-Oct 23:10 Bacula-Director JobId 16: shell command: run BeforeJob
> "/usr/lib/bacula/make_catalog_backup bacula bacula"
> ...
>   Job:BackupCatalog.2009-10-22_23.10.00_38

what do you expect to be backed up from the BackupCatalog job? If you
want to back up something else, you have to define and run a different
job...
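
A minimal sketch of an additional job for the machine itself (all names
hypothetical):

FileSet {
  Name = "Root-FS"
  Include {
    Options { signature = MD5 }
    File = /
  }
}

Job {
  Name = "BackupThisServer"
  Type = Backup
  Level = Full
  Client = myserver-fd
  FileSet = "Root-FS"
  Schedule = "WeeklyCycle"
  Storage = File
  Pool = Default
  Messages = Standard
}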


Ralf



Re: [Bacula-users] LTO-4

2009-10-21 Thread Ralf Gross
Bernardus Belang (Berni) schrieb:
> 
> I just want to know whether or not an IBM or HP LTO-4 tape drive
> connected to RedHat Enterprise Linux 5 will work with Bacula?
> Thank you for your attention.

As long as your OS supports the drive and the changer, bacula will work
fine.

So first test your drive with the OS tools like mtx and mt. Then run
the btape tests:

http://www.bacula.org/3.0.x-manuals/en/problems/problems/Testing_Your_Tape_Drive.html
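
E.g. (device files are examples; run the "test" command at the btape
prompt):

mtx -f /dev/sg3 status
mt -f /dev/nst0 status
btape -c /etc/bacula/bacula-sd.conf /dev/nst0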


Ralf



Re: [Bacula-users] Tape Volumes

2009-10-14 Thread Ralf Gross
Pedro Bordin Hoffmann schrieb:
> Hello!
> I'm doing a backup that requires 2-3 tapes, so my question is: the
> label of the first tape is monday; when I put in a second tape, should
> I name it monday-2, or something else? Or monday again? And when I run
> this backup a second time, do I need to label the tape again, or just
> put the tape in when the other fills up?

AFAIK you can't have two volumes with the same name. Do the tapes have
barcodes? Then just use these barcodes and issue 'label barcode ...'
in bconsole. 

Ralf



Re: [Bacula-users] Device is BLOCKED waiting to create a volume for:...

2009-09-11 Thread Ralf Gross
Ralf Gross schrieb:
> John Drescher schrieb:
> > On Thu, Sep 10, 2009 at 6:37 AM, Ralf Gross  wrote:
> > >
> > > *list media pool=INV-MPC-Differential
> > > | mediaid | volumename | volstatus | enabled | volbytes | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten |
> > > |     595 | A00194L4   | Recycle   |       1 |    1,024 |        0 |  946,080,000 |       0 |  104 |         1 | LTO4      |             |
> > > [...]
> > 
> > All your volumes say Recycle=0 and have a 10950 day retention period.
> > 
> > 946,080,000 / 60 / 60 /24= 10950
> 
> hm, but recycle = 0 only means that no automatic recycling should
> happen. The actual state is already Recycle.

John, maybe you were right. I *thought* I'd always done manual
recycling this way, but I'm not sure anymore. Setting the volumes to
Purged and changing the Recycle flag worked:
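
For the record, the bconsole commands were along these lines (volume
name from the listing above):

update volume=A00194L4 volstatus=Purged
update volume=A00194L4 recycle=yes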

Ralf



Re: [Bacula-users] Trying to restore data of server A's backup on server B

2009-09-11 Thread Ralf Gross
Willians Vivanco schrieb:
> Hi, i need to restore data of server A's backup on to server B 
> filesystem... Any suggestion?

Err, where's the problem? What is not working?

bconsole -> restore -> select files -> set "Restore Client" to server B
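
E.g., in one line (client names are examples):

restore client=serverA-fd restoreclient=serverB-fd select current all done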

Ralf


