Re: [Bacula-users] Network send error to SD. ERR=Connection reset by peer

2013-06-11 Thread Steve Thompson
On Tue, 11 Jun 2013, Leonardo - Mandic wrote:

> On old versions we never had this problem, and it's the same network and
> the same servers as with the old bacula versions.

I have periodically had this problem on all versions of bacula that I have 
used back to 1.38, and have never been able to identify a network problem.

Steve



[Bacula-users] The table 'File' is full

2007-06-28 Thread Steve Thompson
Using Bacula 2.0.3 with MySQL 4.1.20 on a CentOS 4.5 x86 director. Doing a 
full backup of a new file system from a 2.0.3/CentOS 4.5/x86_64 client 
gives:

28-Jun 12:33 dante-dir: No prior Full backup Job record found.
28-Jun 12:33 dante-dir: No prior or suitable Full backup found in catalog. 
Doing FULL backup.
28-Jun 12:33 dante-dir: Start Backup JobId 1800, 
Job=inca-10_data1.2007-06-28_12.33.36
28-Jun 12:33 dante-sd: Volume "Backup-0551" previously written, moving to end 
of data.
28-Jun 12:33 dante-dir: inca-10_data1.2007-06-28_12.33.36 Fatal error: 
sql_create.c:751 sql_create.c:751 insert INSERT INTO File 
(FileIndex,JobId,PathId,FilenameId,LStat,MD5) VALUES (1,1800,8249353,345586,'gR 
BrMoF IGk B RK P1 A A BAA A BGHokQ BGHnZi BGgaUz A A 
E','1B2M2Y8AsgTpgAmY7PhCfg') failed:
The table 'File' is full
28-Jun 12:33 dante-dir: sql_create.c:751 INSERT INTO File 
(FileIndex,JobId,PathId,FilenameId,LStat,MD5) VALUES (1,1800,8249353,345586,'gR 
BrMoF IGk B RK P1 A A BAA A BGHokQ BGHnZi BGgaUz A A 
E','1B2M2Y8AsgTpgAmY7PhCfg')
28-Jun 12:33 dante-dir: inca-10_data1.2007-06-28_12.33.36 Fatal error: 
sql_create.c:753 Create db File record INSERT INTO File 
(FileIndex,JobId,PathId,FilenameId,LStat,MD5) VALUES (1,1800,8249353,345586,'gR 
BrMoF IGk B RK P1 A A BAA A BGHokQ BGHnZi BGgaUz A A 
E','1B2M2Y8AsgTpgAmY7PhCfg') failed. ERR=The table 'File' is full28-Jun 12:33 
dante-dir: inca-10_data1.2007-06-28_12.33.36 Fatal error: catreq.c:476 
Attribute create error. sql_create.c:753 Create db File record INSERT INTO File 
(FileInde
,JobId,PathId,FilenameId,LStat,MD5) VALUES (1,1800,8249353,345586,'gR BrMoF IGk 
B RK P1 A A BAA A BGHokQ BGHnZi BGgaUz A A E','1B2M2Y8AsgTpgAmY7PhCfg') failed. 
ERR=The table 'File' is full28-Jun 12:33 dante-dir: 
inca-10_data1.2007-06-28_12.33.36 Fatal error: sql_create.c:751 
sql_create.c:751 insert INSERT INTO File 
(FileIndex,JobId,PathId,FilenameId,LStat,MD5) VALUES (2,1800,8249353,346071,'gR 
BrMtx IGk B RK P1 A A BAA A BGHnYa BGHnYa BGgaUz A A 
E','1B2M2Y8AsgTpgAmY7PhCfg') failed:
The table 'File' is full

...and so on for several thousand entries. There are about 10 million 
files in this file system, but the initial error message as shown above 
appears right at the beginning of the backup (within one minute). A 
'status dir' shows the backup as no longer running, but a 'status client' 
shows the backup as still active for another hour or so, after which it 
disappears, having apparently saved about 177,000 files.

I've tried starting mysqld with the big-tables option, but that shouldn't 
be needed with this version of mysql, and in any event it doesn't make any 
difference. The /var/bacula filesystem has plenty of free space (about 
8GB). The bacula mysql database is about 6GB in size, and there is about 
9GB free on that file system.

Que?

-Steve



Re: [Bacula-users] The table 'File' is full

2007-06-28 Thread Steve Thompson
On Thu, 28 Jun 2007, David Romerstein wrote:

> On Thu, 28 Jun 2007, Steve Thompson wrote:
>
>> 28-Jun 12:33 dante-dir: inca-10_data1.2007-06-28_12.33.36 Fatal error: 
>> sql_create.c:753 Create db File record INSERT INTO File 
>> (FileIndex,JobId,PathId,FilenameId,LStat,MD5) VALUES 
>> (1,1800,8249353,345586,'gR BrMoF IGk B RK P1 A A BAA A BGHokQ BGHnZi BGgaUz 
>> A A E','1B2M2Y8AsgTpgAmY7PhCfg') failed. ERR=The table 'File' is full28-Jun 
>> 12:33 dante-dir: inca-10_data1.2007-06-28_12.33.36 Fatal error: catreq.c:476 
>> Attribute create error. sql_create.c:753 Create db File record INSERT INTO 
>> File (FileInde
>> ,JobId,PathId,FilenameId,LStat,MD5) VALUES (1,1800,8249353,345586,'gR BrMoF 
>> IGk B RK P1 A A BAA A BGHokQ BGHnZi BGgaUz A A E','1B2M2Y8AsgTpgAmY7PhCfg') 
>> failed. ERR=The table 'File' is full28-Jun 12:33 dante-dir: 
>> inca-10_data1.2007-06-28_12.33.36 Fatal error: sql_create.c:751 
>> sql_create.c:751 insert INSERT INTO File 
>> (FileIndex,JobId,PathId,FilenameId,LStat,MD5) VALUES 
>> (2,1800,8249353,346071,'gR BrMtx IGk B RK P1 A A BAA A BGHnYa BGHnYa BGgaUz 
>> A A E','1B2M2Y8AsgTpgAmY7PhCfg') failed:
>> The table 'File' is full
>
>> I've tried starting mysqld with the big-tables option, but that shouldn't
>> be needed with this version of mysql, and in any event it doesn't make any
>> difference. The /var/bacula filesystem has plenty of free space (about
>> 8GB). The bacula mysql database is about 6GB in size, and there is about
>> 9GB free on that file system.
>
> This is documented in the user manual:
>
> http://www.bacula.org/dev-manual/Catalog_Maintenance.html#SECTION00244

How I missed that I do not know. Thanks!
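
(For the archive: the manual section referenced above covers MySQL's MyISAM
maximum table size, which with older defaults is roughly 4 GB because of the
default row-pointer size. A sketch of the usual remedy, assuming MyISAM and
that limit really are the cause; the MAX_ROWS and AVG_ROW_LENGTH values are
illustrative only and should be sized to your own catalog:

SHOW TABLE STATUS LIKE 'File';    -- Max_data_length shows the current ceiling
ALTER TABLE File MAX_ROWS=200000000 AVG_ROW_LENGTH=100;   -- illustrative values

Run against the bacula database; the ALTER TABLE can take a long time on a
large File table.)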

Steve



[Bacula-users] restore problem

2007-09-06 Thread Steve Thompson
Bacula 2.0.3, director is 32-bit CentOS 4.5, clients are all CentOS 4.5, 
both 32-bit and 64-bit. Backups are to disk files.

I find that I cannot do any restores:

06-Sep 13:59 dante-sd: RestoreFiles.2007-09-06_13.58.29 Error: block.c:275 
Volume data
error at 0:899088562! Wanted ID: "BB02", got "???^g". Buffer discarded. 
06-Sep 13:59 dante-dir: RestoreFiles.2007-09-06_13.58.29 Error: Bacula 2.0.3 
(06Mar07):
06-Sep-2007 13:59:39

I have seen messages in the archive from others with a similar problem, 
with suggestions that they are having hardware problems. I am not having 
hardware problems, however: I can cp the backups, dd them, bls them, 
bextract them, but I cannot run a restoration job to completion. I can 
exercise the hardware for days with no apparent problems.

I have deleted all the backups, reinitialized bacula from scratch, run 
full backups to different disk volumes, and tried a restore again: same 
result.

I'd be thankful for any clue stick that someone can hit me with.

Steve



Re: [Bacula-users] restore problem

2007-09-07 Thread Steve Thompson
On Fri, 7 Sep 2007, Doytchin Spiridonov wrote:

> Just to mention again: while there were suggestions that this is a
> hardware problem, we did a lot of tests and proved the problem is not
> hardware; there is a bug (which, however, was closed as "unable to
> reproduce").
>
> As you are the next case, can you please test it again ("have deleted
> all the backups, reinitialized bacula from scratch, run full backups
> to different disk volumes, and tried a restore again"), but without
> enabling concurrent jobs (I bet you have them enabled), and see whether
> it is OK or you get the same problems?

Very interesting.

My entire set of backup volumes, currently about 4.2 TB, was rsync'd to a 
separate system (no errors) and a restore attempt made there from a fresh 
bacula installation (using the same catalog but otherwise different h/w 
and s/w). This restore attempt also failed in the identical fashion to 
that described previously. The backup volumes are evidently corrupted at 
backup time.

On the original director, I then set "Max Concurrent Jobs" to 1 everywhere 
and ran a full backup of a 300 GB file system containing 151,155 files. 
Same hardware, same software, everything the same except for MCJ=1. I then 
restored all of these files without any error, which I have been unable to 
do with MCJ=2.

The conclusion is that it looks like there may indeed be a bug.

In any event, I shall run with MCJ=1 for now, and re-run full backups of 
all of my data (about 2 TB on this system), and then restore the whole lot 
to see what I get. If I get time I will take a peek at the source.
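
For anyone following along: "everywhere" means the concurrency directives in
both the director and storage daemon configurations. A minimal sketch of the
directive involved; the surrounding resource skeletons are illustrative rather
than taken from this setup:

# bacula-dir.conf -- also applies in Job/JobDefs, Client and Storage resources
Director {
  ...
  Maximum Concurrent Jobs = 1
}

# bacula-sd.conf
Storage {
  ...
  Maximum Concurrent Jobs = 1
}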

Steve
----
Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,300 miles per second: it's not just a good idea, it's the law"




Re: [Bacula-users] restore problem

2007-09-08 Thread Steve Thompson
On Fri, 7 Sep 2007, Dan Langille wrote:

> On 8 Sep 2007 at 2:42, Doytchin Spiridonov wrote:
>> At least we found a working solution (no concurrent jobs, because with
>> concurrent jobs bacula was useless), hoping they will fix it sometime,
>> once they receive enough proof that there IS a bug. You can reopen it
>> at bugs.bacula.org (I'm not going to do it after getting the response
>> "can't replicate, so there are no bugs" several times).
> What do you suggest we do if we are unable to replicate the bug?
> What course[s] of action would you suggest?

OK, I will suggest a course of action.

I am sure that you would agree that enough people have reported this issue 
now to confirm that there is a major problem with concurrent job 
processing that is unrelated to any hardware issues.

I am sure you would also agree that people are running bacula for a 
reason, and they expect to be able to restore their data, and consequently 
they cannot enter into a testing regime with production systems to debug 
this. I can reproduce this problem at will, but I cannot use my own 
systems nor any customer systems for debugging it further, nor give access 
to anyone else to do the same. Now that it is known that using Max 
Concurrent Jobs greater than 1 can lead to volume corruption, no system 
that I manage can use concurrent jobs until the cause is known and fixed. 
And this will apply to everyone using bacula: test your restores 
regularly.

Presumably "you" (developers, not just you personally) have testing 
systems for which the actual backed up data is not important, and that can 
therefore be used to investigate this issue, and that you have a way to 
verify the structural integrity of the saved data volumes, and that you 
cannot expect folks running bacula in production to have the same. Since 
the developers also presumably have an interest in the functionality of 
the code base, and are familiar with the structure of that code, I would 
suggest that for such a major issue an inability to reproduce the problem 
by doing a number of successful restores is not sufficient cause to stop 
investigating it: it has to be worked on it until the cause is known. Let 
me state again that this is a major show-stopper problem. Obviously 
Doytchin has spent considerable time on it already, and his efforts allow 
both him and me, and probably many others, to run backups with a 
reasonable expectation of being able to restore.

I have some spare hardware that I can probably rig up for testing, but I 
have a business to run and my time is therefore limited. I am willing, 
however, to assist in whatever way I can, given these constraints.

Steve
--------
Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,300 miles per second: it's not just a good idea, it's the law"




[Bacula-users] RunAfterJob in bacula 2.2.4

2007-09-27 Thread Steve Thompson
I recently upgraded bacula from 2.0.3 to 2.2.4 on my director system 
(CentOS 4.5 i686). Since then, the RunAfterJob script can no longer be 
started successfully (not even once):

27-Sep 11:36 XXX-dir: AfterJob: run command "/etc/bacula/after_catalog_backup"
27-Sep 11:36 XXX-dir: AfterJob: Bad address

The script has the proper ownership and permissions, and indeed is the 
same as the script used with 2.0.3. The RunBeforeJob script does run, and 
I can run the RunAfterJob script by hand as the bacula user.

What is the "Bad address" telling me?

Steve
----
Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,300 miles per second: it's not just a good idea, it's the law"




Re: [Bacula-users] painfully slow backups

2007-10-05 Thread Steve Thompson
On Wed, 26 Sep 2007, Ross Boylan wrote:

> I've been having really slow backups (13 hours) when I backup a large
> mail spool.  I've attached a run report.  There are about 1.4M files
> with a compressed size of 4G.  I get much better throughput (e.g.,
> 2,000KB/s vs 86KB/s for this job!) with other jobs.
>
> First, does it sound as if something is wrong?  I suspect the number of
> files is the key thing, and the mail  spool has lots of little files
> (it's used by Cyrus).  Is this just life when you have lots of little
> files?

How long does it take just to create a tar file containing those 1.4M 
files, with no bacula in the picture? Perhaps the file system type is of 
concern here.

Steve



Re: [Bacula-users] Bacula database is getting too long

2012-01-11 Thread Steve Thompson
On Wed, 11 Jan 2012, Honia A wrote:

> But when I checked the size of the database it's still really large:
>
> root@servername:/var/lib/bacula# ls -l
> -rw------- 1 bacula bacula 208285783 2012-01-10 05:23 bacula.sql

Depending on what you are backing up, that is not really all that big. 
Mine is about 20 times that size.

-s



[Bacula-users] Full backup forced if client changes

2012-03-24 Thread Steve Thompson
Bacula 5.0.2. For the following example job:

Job {
Name = "cbe_home_a"
JobDefs = "defjob"
Pool = Pool_cbe_home_a
Write Bootstrap = "/var/lib/bacula/cbe_home_a.bsr"
Client = clarke-fd
FileSet = "cbe_home_a"
Schedule = "Saturday3"
}

FileSet {
  Name = "cbe_home_a"
  Include {
    Options {
      wilddir = "/mnt/cbe/home/a*"
    }
    Options {
      exclude = yes
      RegexDir = ".*"
    }
    Options {
      compression = GZIP
      sparse = yes
      noatime = yes
    }
    File = /mnt/cbe/home
  }
}

More than one client is available to back up the (shared) storage. If I 
change the name of the client in the Job definition, a full backup always 
occurs the next time the job is run. How do I avoid this?

Steve




Re: [Bacula-users] Full backup forced if client changes

2012-04-02 Thread Steve Thompson
On Sat, 24 Mar 2012, James Harper wrote:

>> more than one client is available to backup the (shared) storage. If I change
>> the name of the client in the Job definition, a full backup always occurs the
>> next time a job is run. How do I avoid this?
>
> That's definitely going to confuse Bacula. As far as it is concerned you 
> are backing up a separate client with separate storage.

I still don't follow this. The client has changed, but everything else 
(pool, storage, catalog, etc) is the same. I don't see why a full backup 
is forced, or why bacula should be even slightly confused.
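
One workaround sometimes used when several nodes can all see the same shared
storage (not discussed in this thread, so treat it as an untested sketch) is
to keep the Client name constant and point its Address at whichever node
should do the work, for example via a DNS alias, so that Bacula never sees a
client change at all:

Client {
  Name = clarke-fd                 # name stays constant across runs
  Address = cbe-backup-alias       # DNS alias (illustrative) moved between nodes
  FDPort = 9102
  Catalog = MyCatalog              # illustrative
  Password = "xxx"                 # every candidate FD must accept this password
}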

Steve



[Bacula-users] Fileset: need a second pair of eyes

2012-04-16 Thread Steve Thompson
Bacula 5.0.2. This fileset:

FileSet {
  Name = "toe_home_x"
  Include {
    Options {
      exclude = yes
      wilddir = "/mnt/toe/data*/home/*/.NetBin"
      wilddir = "/mnt/toe/data*/home/*/.Trash"
      wilddir = "/mnt/toe/data*/home/*0"
      wilddir = "/mnt/toe/data*/home/*1"
      wilddir = "/mnt/toe/data*/home/*2"
      wilddir = "/mnt/toe/data*/home/*3"
      wilddir = "/mnt/toe/data*/home/*4"
      wilddir = "/mnt/toe/data*/home/*5"
      wilddir = "/mnt/toe/data*/home/*6"
      wilddir = "/mnt/toe/data*/home/*7"
      wilddir = "/mnt/toe/data*/home/*8"
      wilddir = "/mnt/toe/data*/home/*9"
    }
    Options {
      compression = GZIP
      sparse = yes
      noatime = yes
    }
    File = /mnt/toe/data1/home
    File = /mnt/toe/data2/home
  }
}

is intended to back up the entire contents of the two file systems 
/mnt/toe/data1/home and /mnt/toe/data2/home, with the exception of the 
first-level directories that end with a number (the directories to be 
included are variable in name and number). Well, it works, except that it 
does not back up any directories (and their contents) in (say) 
/mnt/toe/data1/home/foo that have white space in their names. What have I 
done wrong?

Steve
-- 

Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,282 miles per second: it's not just a good idea, it's the law"




Re: [Bacula-users] Fileset: need a second pair of eyes

2012-04-17 Thread Steve Thompson
On Tue, 17 Apr 2012, Martin Simmons wrote:

> Are you sure it is related to white space?  I don't see anything in the above
> FileSet that would cause it.  Maybe the missing directories are part of a
> different filesystem mounted on top of the main one?

There's only one file system and no nested mounts. The missing files are 
all over the file system at different depth levels, and the only thing
that I could see in common between them was a space in the name.

> You could test it with a much simpler fileset:
>
> FileSet {
>   Name = "toe_home_x"
>   Include {
> File = /mnt/toe/data1/home/foo
>   }
> }

Yes, I've done this; it works.

Steve



[Bacula-users] max run time

2012-07-14 Thread Steve Thompson
Bacula 5.0.2, CentOS 5.8.

I have this in my job definitions:

Full Max Run Time = 29d

but still they are terminated after 6 days:

14-Jul 20:27 cbe-dir JobId 39969: Fatal error: Network error with FD
during Backup: ERR=Interrupted system call
14-Jul 20:27 cbe-dir JobId 39969: Fatal error: No Job status returned from FD.
14-Jul 20:27 cbe-dir JobId 39969: Error: Watchdog sending kill after
518426 secs to thread stalled reading File

I'd like to know how to fix this.

I've seen the comments in the mailing list in the past that running 
backups that take more than 6 days is "insane". They're wrong in my 
environment. I don't want to hear that again. I have a genuine reason for 
running very long backups and I need to know how to make it work.

Steve



Re: [Bacula-users] max run time

2012-07-14 Thread Steve Thompson
On Sat, 14 Jul 2012, Joseph Spenner wrote:

> That's insane!  :)

Heh :)

> Ok, can you maybe carve it up a little?  How big is the backup?

I have already carved it up just about as much as I can. I have to back up 
about 6 TB in 28 million files (that change very slowly) to a remote 
offsite SD. I get at most 5 MB/sec with software compression (backups go 
to disk on the SD), so we're looking at around 2 weeks run time for a full 
backup (and 30 minutes for an incremental). The file system that this 6 TB 
sits on is about 60 TB in size and is carved up into about 30 jobs 
already.

Steve



Re: [Bacula-users] max run time

2012-07-15 Thread Steve Thompson
On Sat, 14 Jul 2012, Boutin, Stephen wrote:

> Try changing (or adding, if you don't have it already) the heartbeat 
> interval variable. I'm currently backing up about 160TB in total, and 
> some of the boxes are 8-29TB jobs. Heartbeat is a must for large jobs, as 
> far as I'm concerned.

Good idea, but unfortunately I already have the heartbeat interval set to
120 seconds.
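
For context, that is the Heartbeat Interval directive; a minimal sketch of
where the 120-second value is set, with the resource skeletons being
illustrative:

# bacula-fd.conf
FileDaemon {
  ...
  Heartbeat Interval = 120
}

# bacula-sd.conf
Storage {
  ...
  Heartbeat Interval = 120
}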

Steve



Re: [Bacula-users] max run time

2012-07-15 Thread Steve Thompson
On Sun, 15 Jul 2012, Thomas Lohman wrote:

> This actually is a hardcoded "sanity" check in the code itself.  Search
> the mailing lists from the past year.  I'm pretty sure I posted where in
> the code this was and what needed to be changed.

Excellent; thank you! I have found your post and the relevant code.

Steve



Re: [Bacula-users] max run time

2012-07-19 Thread Steve Thompson
On Thu, 19 Jul 2012, Dan Langille wrote:

> On 2012-07-15 13:48, Steve Thompson wrote:
>> On Sun, 15 Jul 2012, Thomas Lohman wrote:
>> 
>>> This actually is a hardcoded "sanity" check in the code itself.  Search
>>> the mailing lists from the past year.  I'm pretty sure I posted where in
>>> the code this was and what needed to be changed.
>> 
>> Excellent; thank you! I have found your post and the relevant code.
>
> Please let us know how that went.

I will do so. The long job is running but is only three days into the run 
at the moment.

Steve



[Bacula-users] fileset: second eyes needed

2012-07-25 Thread Steve Thompson
Bacula 5.0.2, CentOS 5.8 x86_64.

I need a second pair of eyes on this.

I have a large file system (/mnt/home, 60TB) whose backups are split into 
multiple jobs. Two jobs in particular: one to back up all directories whose 
names begin with "s" except for st123, and a second job to back up just st123:

Job {
   Name = "home_s"
   JobDefs = "defjob"
   Pool = Pool_home_s
   Write Bootstrap = "/var/lib/bacula/home_s.bsr"
   Client = client-fd
   FileSet = "home_s"
   Schedule = "Saturday4"
}

Job {
   Name = "home_st123"
   JobDefs = "defjob"
   Pool = Pool_home_st123
   Write Bootstrap = "/var/lib/bacula/home_st123.bsr"
   Client = client-fd
   FileSet = "home_st123"
   Schedule = "Saturday1"
}

The file sets are as follows:

FileSet {
  Name = "home_s"
  Include {
    Options {
      wilddir = "/mnt/home/s*"
    }
    Options {
      exclude = yes
      wilddir = "/mnt/home/st123"
      RegexDir = ".*"
    }
    Options {
      compression = GZIP4
      sparse = yes
      noatime = yes
    }
    File = /mnt/home
  }
}

FileSet {
  Name = "home_st123"
  Include {
    Options {
      compression = GZIP4
      sparse = yes
      noatime = yes
    }
    File = /mnt/home/st123
  }
}

All is well with the home_st123 job: it backs up only what is expected. 
However, the home_s job backs up only the directories whose names begin 
with "s", but it also backs up st123, which has been excluded. Presumably 
I have the Options clauses incorrectly defined?
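
One rearrangement that is often suggested, assuming the documented rule that
Options blocks are tested in order and the first matching block decides a
directory's fate, is to give the st123 exclusion its own Options block placed
ahead of the broader "s*" include. A hedged sketch, not verified here:

FileSet {
  Name = "home_s"
  Include {
    Options {
      exclude = yes
      wilddir = "/mnt/home/st123"   # excluded before the s* include is consulted
    }
    Options {
      wilddir = "/mnt/home/s*"
    }
    Options {
      exclude = yes
      RegexDir = ".*"
    }
    Options {
      compression = GZIP4
      sparse = yes
      noatime = yes
    }
    File = /mnt/home
  }
}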

TIA,

Steve
-- 

Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,282 miles per second: it's not just a good idea, it's the law"




Re: [Bacula-users] max run time

2012-07-29 Thread Steve Thompson
On Thu, 19 Jul 2012, Dan Langille wrote:

> On 2012-07-15 13:48, Steve Thompson wrote:
>> On Sun, 15 Jul 2012, Thomas Lohman wrote:
>> 
>>> This actually is a hardcoded "sanity" check in the code itself.  Search
>>> the mailing lists from the past year.  I'm pretty sure I posted where in
>>> the code this was and what needed to be changed.
>> 
>> Excellent; thank you! I have found your post and the relevant code.
>
> Please let us know how that went.

The long backup finally completed; I was expecting a run time of 14 days,
but it actually took 12 days and 2 hours. In bacula 5.0.2, the code to
be modified is at line 687 of lib/bnet.c, and line 76 of lib/bsock.c; the
modifications are obvious.

Steve



[Bacula-users] backup vs restore performance

2013-01-14 Thread Steve Thompson
Bacula 5.2.10, CentOS 5/6, x86_64.

Just a curiosity. I note that full backup performance across many systems 
is typically in the 6-10 MB/sec range; I am using GZIP4 and the backups 
are typically compute bound doing software compression (clients are Xeons 
in the 3GHz range, and the SD is a 2GHz Xeon system, with all backups done 
to disk. The SD is about half a mile distant). However, if I do a restore 
of a large volume of data, I get 32-35 MB/sec. Seems a little odd that it 
is so asymmetrical.

Steve
-- 

Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,282 miles per second: it's not just a good idea, it's the law"




Re: [Bacula-users] backup vs restore performance

2013-01-14 Thread Steve Thompson
On Mon, 14 Jan 2013, John Drescher wrote:

> I would say this is a combination of filesystem performance (remember
> that when you back up there can be a lot of seeks that reduce
> performance) and decompression performance. Decompression is less CPU
> intensive than compression.

Ah yes, you're right. A gunzip on some test files is indeed 4-5 times 
faster than a gzip on the same data. I never noticed that big a difference 
before.

Steve



Re: [Bacula-users] Bad response to Append Data command.

2007-12-05 Thread Steve Thompson
On Wed, 5 Dec 2007, Martin Simmons wrote:

>>>>>> On Wed, 5 Dec 2007 11:36:33 -0500 (EST), Steve Thompson said:
>> I see this very often as well, and I am using disk exclusively. It also
>> happens about 40% of the time, and has done since I started with bacula at
>> 1.38 (now on 2.2.4). I'd like to see a proper explanation of what this
>> message really means. It's certainly annoying.
>
> It is a generic error message meaning "the SD didn't like something", so it
> doesn't tell you much.  Sometimes the text after the word "got" is useful,
> but more often you have to look at the previous messages to find out what it
> didn't like.

This is all I get, consistently:

05-Dec 13:02 vger-dir: No prior Full backup Job record found.
05-Dec 13:02 vger-dir: No prior or suitable Full backup found in catalog. Doing 
FULL backup.
05-Dec 13:02 vger-dir: Start Backup JobId 1093, 
Job=vger_data1.2007-12-05_13.02.03
05-Dec 13:02 vger-dir: There are no more Jobs associated with Volume 
"Backup-0073". Marking it purged.
05-Dec 13:02 vger-dir: All records pruned from Volume "Backup-0073"; marking it 
"Purged"
05-Dec 13:02 vger-dir: Recycled volume "Backup-0073"
05-Dec 13:02 vger-dir: Using Device "Backup"
05-Dec 13:02 vger-fd: vger_data1.2007-12-05_13.02.03 Fatal error: job.c:1811 
Bad response to Append Data command. Wanted 3000 OK data
, got 3903 Error append data

Steve



Re: [Bacula-users] Bad response to Append Data command.

2007-12-05 Thread Steve Thompson
On Wed, 5 Dec 2007, Martin Simmons wrote:

>>>>>> On Wed, 5 Dec 2007 13:46:26 -0500 (EST), Steve Thompson said:
>> They are all the same at 2.2.4. It happens even in the case where
>> bacula-dir, bacula-fd and bacula-sd are running on the same machine.
>> Everything was rebuilt from source by myself, and installed on a clean
>> O/S, but I have had this problem with every single version of bacula since
>> 1.38.
> The SD must be failing to pass the error message back to the Director.
> Do you see any messages from the SD in your logs (e.g. about recycling)?  If
> not, check that the SD's Messages resource is configured correctly, e.g. if
> your bacula-sd.conf contains
> [...]

Yes, this is all correct.

> If that is OK, then I suggest running the SD with debug level 200, which might
> give us a clue where the error occurs.

Will do, and will report back.

Thanks,
Steve



Re: [Bacula-users] Bad response to Append Data command.

2007-12-05 Thread Steve Thompson
On Wed, 5 Dec 2007, Dan Langille wrote:

> My first idea: different versions of SD and FD, with one trying to use a 
> command the other does not recognize.
> What version is each of: bacula-dir, bacula-fd, bacula-sd

They are all the same at 2.2.4. It happens even in the case where 
bacula-dir, bacula-fd and bacula-sd are running on the same machine. 
Everything was rebuilt from source by myself, and installed on a clean 
O/S, but I have had this problem with every single version of bacula since 
1.38.

Steve



Re: [Bacula-users] Bad response to Append Data command.

2007-12-05 Thread Steve Thompson
On Wed, 5 Dec 2007, [EMAIL PROTECTED] wrote:

> I am still experiencing this problem on a regular basis; not every job
> does this, but it seems a good 40% do each night.
> [...]
> 05-Dec 03:33 escabot-fd JobId 8219: Fatal error: job.c:1811 Bad response 
> to Append Data command. Wanted 3000 OK data

I see this very often as well, and I am using disk exclusively. It also 
happens about 40% of the time, and has done since I started with bacula at 
1.38 (now on 2.2.4). I'd like to see a proper explanation of what this
message really means. It's certainly annoying.

Steve



Re: [Bacula-users] Bad response to Append Data command.

2007-12-08 Thread Steve Thompson
On Wed, 5 Dec 2007, Martin Simmons wrote:

> If that is OK, then I suggest running the SD with debug level 200, which 
> might give us a clue where the error occurs.

So far I have been unable to get it to fail using -d200, while it does 
fail if I don't specify a debug level. Maybe there is a timing issue. I'll 
keep trying.

Steve



Re: [Bacula-users] Missing volumes found again

2007-12-16 Thread Steve Thompson
On Sun, 16 Dec 2007, David Legg wrote:

> I'm sure this is all 'obvious' to the old hacks but is there a way to
> prevent files being written into the mount point when no drive is
> actually mounted?

Just do your backups to a subdirectory on the drive, one that is not
present below the mount point when the drive is not mounted.
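
In storage daemon terms that just means pointing the Archive Device at a
subdirectory instead of at the mount point itself; a minimal sketch, with the
device name and path being illustrative:

Device {
  Name = USBBackup                        # illustrative
  Media Type = File
  Archive Device = /mnt/usbdisk/bacula    # subdirectory that exists only while mounted
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

If the drive is not mounted, /mnt/usbdisk/bacula does not exist, so the job
fails instead of silently filling the root filesystem.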

Steve




[Bacula-users] Files Examined?

2007-12-22 Thread Steve Thompson
Here's something interesting. Bacula 2.2.4 on both client (64-bit CentOS 
4.5) and director (32-bit CentOS 4.5).

During a backup:

JobId 3516 Job asimov_data7.2007-12-20_23.00.08 is running.
 Backup Job started: 20-Dec-07 23:44
 Files=111,850 Bytes=1,863,801,574 Bytes/sec=15,977 Errors=0
 Files Examined=16,999,856
 Processing file: /mnt/mda/data7/XXX
 SDReadSeqNo=5 fd=14

The "Files Examined" count will eventually go to about 18.5 million by the 
end of the backup (for every full or incremental backup). However, the 
include list for this job includes only a single relatively static file 
system, on which there are only just over 13 million inodes in use:

Filesystem            Inodes    IUsed    IFree IUse% Mounted on
/dev/sdh1           36552704 13377854 23174850   37% /mnt/mda/data7

So what is the "Files Examined" count really telling me? The JobFiles
count from a 'list job' is correct, however.

Secondly, note the very low bytes/sec figure for this backup. This is an 
ext3 file system with dir_index turned off: an incremental backup takes 
25-30 hours to complete. I have an identical copy of this file system on 
the same client with dir_index turned on: an incremental backup of this 
takes only 4 hours. Both are SAS 300GB 10k rpm, RAID-1. Lots of 
directories over 4K, I suspect, but I don't know the count.

Steve



Re: [Bacula-users] Files Examined?

2007-12-24 Thread Steve Thompson
On Mon, 24 Dec 2007, Martin Simmons wrote:

>>>>>> On Sat, 22 Dec 2007 08:29:54 -0500 (EST), Steve Thompson said:
>> [...]
>> So what is the "Files Examined" count really telling me? The JobFiles
>> count from a 'list job' is correct, however.
> Do you have lots of hard links in that filesystem?  Bacula will count those as
> files but df will not show them.

Duh, yes, of course.

> A Bacula backup has to stat every file in the filesystem, so dir_index could
> make quite a difference.

Sure, I understand that. I was just surprised at the magnitude of the 
difference.

Steve



[Bacula-users] commas

2008-02-28 Thread Steve Thompson
In the output of commands such as 'list jobs', is it possible to configure 
bacula to display numeric quantities as digits alone (no commas)? I really 
find this difficult to read.

-s



Re: [Bacula-users] commas

2008-02-29 Thread Steve Thompson
On Fri, 29 Feb 2008, Ryan Novosielski wrote:

> Martin Simmons wrote:
>>>>>>> On Thu, 28 Feb 2008 11:20:26 -0500 (EST), Steve Thompson said:
>>> In the output of commands such as 'list jobs', is it possible to configure
>>> bacula to display numeric quantities as digits alone (no commas)? I really
>>> find this difficult to read.
>>
>> I don't think it is possible, except by editing the code.  I find it annoying
>> too, especially for jobids, because it prevents easy copy/paste to other
>> bconsole commands and searching in logs.
>
> I'd want to see it fixed in jobids, but not other places. It makes
> totals actually MORE readable (though I realize internationally, that
> might be subjective).

This may well depend a lot on the individual. Certainly the JobId field 
needs to be displayed without commas in all situations; it is too hard to 
cut and paste and too easy to make a mistake otherwise. For other fields, 
such as JobBytes, I find something like "52,610,802,628" harder to read, 
and would prefer it without commas. But I would really prefer that to be 
"52.610 GB" or (better) "48.997 GiB".

> I suspect it is a very easy change to make though, if one would want to.
> I'm not sure how the tables are generated in bconsole.

It looks like a dump of what is returned from the database server, 
without any interpretation by bconsole.

Steve



Re: [Bacula-users] What causes orphaned Path/Files in database?

2008-10-16 Thread Steve Thompson
On Thu, 16 Oct 2008, [EMAIL PROTECTED] wrote:

> In our environment, dbcheck takes about 3 days to run. The catalog is about
> 21GB, and the server is a dual-proc 3.2GHz, 32-bit machine with 12GB of RAM,
> running bacula 1.38.11 and MySQL 5.?.

That sounds like rather a long time. One of my bacula environments has a 
current catalog size of about 12 GB, and on a 3.0 GHz Core2 box (two cores 
total), CentOS 5.2, dbcheck takes about an hour or so. Obviously this is a 
much faster machine than you are using, but it shouldn't make _that_ much 
difference. I'm doing a:

CREATE INDEX file_tmp_pathid_idx ON File (PathId);
CREATE INDEX file_tmp_filenameid_idx ON File (FilenameId);

before the dbcheck, which produces an enormous speedup.
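
(The message does not say so, but presumably those temporary indexes are
dropped again once dbcheck has finished, so that ordinary backup inserts into
File are not slowed down by the extra indexes; in MySQL that would be:

DROP INDEX file_tmp_pathid_idx ON File;     -- assumed cleanup, not stated above
DROP INDEX file_tmp_filenameid_idx ON File;

)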

Steve
----
Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,300 miles per second: it's not just a good idea, it's the law"




Re: [Bacula-users] What new feature are you waiting for?

2008-11-05 Thread Steve Thompson
On Tue, 4 Nov 2008, John Drescher wrote:

>>  Converting it to an XML file would not pose the problems specified in 
>> the above wiki; there are lots of tools to create/parse XML files that 
>> could be useful.
>>
> I would vote against this if I could. I mean this will make it harder
> for me to edit the configuration files through ssh and to me any gui
> tools to edit the files will just get in the way being that I have 40+
> clients and about 75 different jobs.

I also strongly vote against this, for the same reasons.

Steve



Re: [Bacula-users] What new feature are you waiting for?

2008-11-05 Thread Steve Thompson
On Wed, 5 Nov 2008, Hemant Shah wrote:

> I use/edit XML just about everyday and I do not find it difficult. 
> Converting current config to XML should be pretty easy.

I don't find it difficult either, but that doesn't mean I like the idea at 
all; in fact I detest XML with a passion. And the admin that comes after 
me might not understand XML even a little bit. There are too many other 
things that need doing in bacula to spend time futzing with a part that 
works. I use bacula in more than one installation, and I can say right now 
that if the config files become XML only, even if there is a fancy tool to 
edit them, I will stop using bacula. No doubt there are many that will 
find this view unreasonable, but I can't help that.

Steve
----
Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,300 miles per second: it's not just a good idea, it's the law"




[Bacula-users] Multiple pools and volume names

2008-12-04 Thread Steve Thompson
Bacula 2.4.2.

I have just added a second pool to an all-disk configuration and have a 
question concerning automatic volume numbering. The relevant details are:

Pool {
Name = Foo_Pool
Storage = Foo_Storage
Label Format = "Foo-"
...
}

Pool {
Name = Bar_Pool
Storage = Bar_Storage
Label Format = "Bar-"
...
}

Single catalog. When the second pool was added, there were 3260 volumes in 
Foo_Pool (from Foo-0001 to Foo-3260). Everything works, but the first 
backup that went to Bar_Pool created a volume called Bar-3261, when I was 
expecting Bar-0001. Why is this?

TIA,
Steve



Re: [Bacula-users] Multiple pools and volume names

2008-12-04 Thread Steve Thompson
On Thu, 4 Dec 2008, Arno Lehmann wrote:

> 04.12.2008 19:41, Steve Thompson wrote:
>> Single catalog. When the second pool was added, there were 3260 volumes in 
>> Foo_Pool (from Foo-0001 to Foo-3260). Everything works, but the first
>> backup that went to Bar_Pool created a volume called Bar-3261, when I was
>> expecting Bar-0001. Why is this?
>
> That's because Bacula uses the overall number of known volumes, not
> the number of volumes per pool. This decreases the probability that it
> tries to create a volume which already exists (though it doesn't
> absolutely prevent that).

Arno,

Thanks for your reply. I do indeed see from the description of the 
automatic volume labeling feature that this is the case, and that this is 
what bacula actually does. In the 'Pool' section, under the description of 
Label Format, however, it says otherwise:

"If no variable expansion characters are found in the string, the
Volume name will be formed from the format string appended with the
number of volumes in the pool plus one,..."

where it actually means "...number of volumes in the catalog plus one...". 
I will probably switch to using multiple catalogs.

Steve
----
Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,300 miles per second: it's not just a good idea, it's the law"




Re: [Bacula-users] Building directory tree is very slowly

2007-01-10 Thread Steve Thompson
On Wed, 10 Jan 2007, Thomas Glatthor wrote:

> For a 2,500,000-file job my machine needs ~15 minutes.
> The database query is finished after ~1 minute,
> but the director needs 14 minutes to build the tree, or whatever happens
> in this time.
> Also, the director is only using one CPU for this work, instead of four.
>
> I don't think that the database is the problem.

I have also noticed the very same thing: when doing a restore from a
2,000,000-file job (for both 1.38.1 and 2.0.0) the initial SQL activity is
over with pretty quickly, but then the director is CPU-bound for minutes
at a stretch with no I/O being performed.

Steve
--------
Steve Thompson E-mail:  [EMAIL PROTECTED]
Voyager Software LLC   Web: http://www.vgersoft.com
39 Smugglers Path  VSW Support: [EMAIL PROTECTED]
Ithaca, NY 14850
  "186,300 miles per second: it's not just a good idea, it's the law"




[Bacula-users] Director segfaults when restoring an OS X PPC system

2007-01-24 Thread Steve Thompson
Bacula 2.0.0 on both director and clients.

I recently installed Bacula on two Mac OS X systems (one at 10.3 on a 
PowerPC box, and one at 10.4 on an Intel box). Both were compiled from 
source. The director is running on an RHEL3 x86 box (fully up2date), with 
bacula installed from RPM. I am using MySQL 3.23.58 as it comes with 
RHEL3.

Backups of both OS X systems work well. Restores to the Intel OS X system 
also work well. Restores to the PPC OS X system cause the director to 
segfault within seconds after starting the restore job running. If I 
modify the restore job to restore to a different FD, it works. Anyone seen 
this?

Steve

Steve Thompson E-mail:  [EMAIL PROTECTED]
Voyager Software LLC   Web: http://www.vgersoft.com
39 Smugglers Path  VSW Support: [EMAIL PROTECTED]
Ithaca, NY 14850
   "186,300 miles per second: it's not just a good idea, it's the law"




[Bacula-users] Bad response to Append Data

2007-02-13 Thread Steve Thompson
Bacula 2.0.1, backing up to disk exclusively, using one device and the 
default pool. I get this about 20% of the time:

12-Feb 23:11 vger-dir: Start Backup JobId 601, Job=vger_u2.2007-02-12_23.05.02
12-Feb 23:11 vger-dir: Recycled volume "Backup-0065"
12-Feb 23:11 vger-sd: vger_u2.2007-02-12_23.05.02 Fatal error: acquire.c:355 
Wanted Volume "Backup-0065", but device "Backup" (/fs/vsw/b1/BACULA/VOLUMES) is 
busy writing on "Backup-0014" .
12-Feb 23:11 vger-fd: vger_u2.2007-02-12_23.05.02 Fatal error: job.c:1752 Bad 
response to Append Data command. Wanted 3000 OK data
, got 3903 Error append data

OK, so we can't write to two different volumes at once. What is the 
advised way to fix this?

-s



Re: [Bacula-users] limit the size of storage files

2007-02-20 Thread Steve Thompson
On Tue, 20 Feb 2007, Steve Barnes wrote:

> How about coining a new "word"  RTNM (read the nice manual), or RTGM
> (read the good manual) or RTBM (read the big manual).  :-)

RTFBM :)



Re: [Bacula-users] Rename Storage Daemon

2008-12-18 Thread Steve Thompson
On Thu, 18 Dec 2008, Stefan Sorin Nicolin wrote:

> I am about to reconfigure a mid-sized Bacula installation. I'd like to
> rename the storage daemon, meaning the "Name" directive in the Storage
> { } block. Is this asking for trouble? Right now I am a bit nervous
> because I just learned (the hard way) that renaming jobs doesn't go
> well with restoring old files...

I have renamed a storage daemon several times with no issues.

Steve



Re: [Bacula-users] Monitoring bacula with Nagios

2009-03-27 Thread Steve Thompson
On Fri, 27 Mar 2009, John Drescher wrote:

> Have you ever seen bacula die? I mean in 5 years of using bacula on 35
> to 50 machines I do not recall ever seeing bacula die.

Yep; storage daemon (2.4.2) dies on me about once a month. I get a file 
daemon failure about once a month too, but of course there's a lot more
of those.

Incidentally, when the file daemon dies, the job sits in the "waiting for 
FD to connect" state forever. The director has to be restarted, otherwise
no further jobs start.

Steve



Re: [Bacula-users] Execution order of scheduled jobs

2009-07-23 Thread Steve Thompson
On Thu, 23 Jul 2009, Jeronimo Zucco wrote:

> I didn't find how to do that. I've tried to change the order of my
> Job and Client directives in bacula-dir.conf, but without effect. My
> question is:
>
> How does bacula determine the order of execution of scheduled jobs that
> use the same schedule? Is it based on an SQL query, or on the order of
> declaration of some directive in bacula-dir.conf? Which directive?

In my experience, jobs of the same priority always start in the order
in which they are defined in bacula-dir.conf. Did you reload the
director after changing the order?
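
If you need a guaranteed order rather than relying on the order of
definition, the Priority directive is the usual tool. A minimal sketch with
hypothetical job names (lower priorities run first, and by default jobs of
different priorities do not run concurrently):

# bacula-dir.conf -- explicit ordering via Priority
Job {
   Name = "clients-first"         # hypothetical
   JobDefs = "DefaultJob"
   Priority = 10                  # runs first
}

Job {
   Name = "catalog-last"          # hypothetical
   JobDefs = "DefaultJob"
   Priority = 11                  # starts only after the priority-10 jobs finish
}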

Steve
----
Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,300 miles per second: it's not just a good idea, it's the law"


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Average Load? Solaris vs. Linux

2009-11-30 Thread Steve Thompson
On Mon, 30 Nov 2009, Stephen Thompson wrote:

> shows a consistent load average on Solaris of between 0-1 (occasional
> peaks above) and a consistent load average on Linux of 2-4 (occasional
> peaks above, though seldom below unless "literally" idle).
> [...]
> To say the least, this is rather disappointing as a Linux fan.
> Does anyone have an explanation or remediation for this?

Since you have the same name as me, I'll reply :)

What you are probably seeing is that (as I understand it) the load average
does not measure the same thing on Solaris and Linux. On Linux it includes
processes in uninterruptible sleep (e.g. disk I/O), and on Solaris it does
not, so it is reasonable for the value to be higher on Linux. Whether either
method of calculation is the more reasonable is of course a different
question.

Steve 
---- 
Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,300 miles per second: it's not just a good idea, it's the law"


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] client rejected Hello command

2010-03-29 Thread Steve Thompson
On Mon, 29 Mar 2010, Roland Roberts wrote:

> It was a major upgrade; reboot was part of the process.  And it's been
> rebooted since then.

What does a "telnet archos.rlent.pnet 9102" give you?

Steve
----
Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,300 miles per second: it's not just a good idea, it's the law"


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] ClientRunBeforeJob question

2010-08-11 Thread Steve Thompson
Bacula 5.0.2. The documentation states that a ClientRunBeforeJob script 
that returns a non-zero status causes the job to be cancelled. This is not 
what appears to happen, however. Instead a fatal error is declared:

11-Aug 13:30 cbe-dir JobId 686: No prior Full backup Job record found.
11-Aug 13:30 cbe-dir JobId 686: No prior or suitable Full backup found in 
catalog. Doing FULL backup.
11-Aug 13:30 cbe-dir JobId 686: Start Backup JobId 686, 
Job=baxter_fs1b.2010-08-11_13.30.26_09
11-Aug 13:30 cbe-dir JobId 686: Using Device "Data1"
11-Aug 13:30 toe-fd JobId 686: shell command: run ClientRunBeforeJob 
"/etc/bacula/cbe_hanfs.sh /mnt/baxter/fs1"
11-Aug 13:30 toe-fd JobId 686: Error: Runscript: ClientRunBeforeJob returned 
non-zero status=1. ERR=Child exited with code 1
11-Aug 13:30 cbe-dir JobId 686: Fatal error: Bad response to ClientRunBeforeJob 
command: wanted 2000 OK RunBefore
, got 2905 Bad RunBeforeJob command.
11-Aug 13:30 cbe-dir JobId 686: Fatal error: Client "toe-fd" RunScript failed.
11-Aug 13:30 cbe-dir JobId 686: Error: Bacula cbe-dir 5.0.2 (28Apr10): 
11-Aug-2010 13:30:29

This causes alerts to be generated and leaves a failed job in the 
database. I don't see any obvious way to fix this; does anyone know how?
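
For what it's worth, ClientRunBeforeJob is shorthand for a RunScript block,
which exposes the error-handling directives; a sketch based on the job above
(treat FailJobOnError as something to verify against your version's manual,
since it only controls whether a non-zero exit is fatal, not whether the job
is silently cancelled):

# bacula-dir.conf -- the ClientRunBeforeJob line written as a RunScript block
Job {
   Name = "baxter_fs1b"
   JobDefs = "defjob"
   RunScript {
      RunsWhen = Before
      RunsOnClient = yes
      FailJobOnError = yes        # "no" lets the job continue despite a non-zero exit
      Command = "/etc/bacula/cbe_hanfs.sh /mnt/baxter/fs1"
   }
}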

Steve

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] ClientRunBeforeJob question

2010-08-11 Thread Steve Thompson
On Wed, 11 Aug 2010, John Drescher wrote:

>> Bacula 5.0.2. The documentation states that a ClientRunBeforeJob script
>> that returns a non-zero status causes the job to be cancelled. This is not
>> what appears to happen, however. Instead a fatal error is declared:
>
> Maybe the documentation should be better written because it appears to
> have done exactly what I expected. I mean the run script returns a non
> zero code ( meaning error ) and the job was canceled.

Ah, I see that "run before job and cancel" with a non-zero exit status
actually means "run as part of the job and kill it". Really what I want is
for a non-zero exit status from the ClientRunBeforeJob script to act as if
the job had never been run at all; i.e. no diagnostics, no database entry,
no e-mail, etc. Acting as a cancel of a pending job would be enough.

What I have here are two servers using DRBD+heartbeat to export several
file systems that are built on top of DRBD devices. Each of the two
servers runs its own backups, and there is a third (fourth...) backup of
the highly-available file systems, each using a different pool. I'm
looking for an elegant way to back up the HAFS. I've tried:

(1) two jobs backing up the HAFS, one on each host, using
ClientRunBeforeJob to determine which one is active. It works, but is not
very clean looking.

(2) separate backup jobs for the servers, with a third bacula client (the
virtual IP) backing up the HAFS. I'm using TLS and can't get this to work:
the FD's certificate has the wrong common name, since it carries the name
corresponding to the VIP rather than one of the two server names, which
are of course different. I'd rather do it this way if anyone has managed
to get it to work (see the sketch below).
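
For anyone trying the same thing, this is the shape of the director-side
Client resource I have in mind for the VIP (all names, passwords and paths
are hypothetical). The sticking point is the certificate the FD presents:
it would have to be issued for, or carry a subjectAltName covering, the VIP
name as well as the two real hostnames; the TLS Allowed CN directive may
also be worth reading up on in the manual.

# bacula-dir.conf -- hypothetical third Client addressed via the virtual IP
Client {
   Name = hafs-fd                          # hypothetical
   Address = hafs.example.com              # the VIP's DNS name; hypothetical
   FD Port = 9102
   Catalog = BackupCatalog
   Password = "fd-password"                # hypothetical
   TLS Enable = yes
   TLS Require = yes
   TLS CA Certificate File = /etc/bacula/ssl/ca.pem      # hypothetical paths
   TLS Certificate = /etc/bacula/ssl/dir-cert.pem
   TLS Key = /etc/bacula/ssl/dir-key.pem
}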

Steve

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Spooling and backup concept

2010-08-26 Thread Steve Thompson
On Thu, 26 Aug 2010, m...@free-minds.net wrote:

> 1) do we really need to spool? We are not writing to real tapes, we have a
> filesystem as backend (ext3 over glusterfs).

I am one of those who believe spooling to be useful even when writing
backups to disk; it is obviously not in question that it is good when
writing to tape. I see the advantage as having the backup volumes less
fragmented than when writing directly without a spool, especially when
running many concurrent jobs, and this improves restore performance. In
some scenarios the spool may be faster anyway; for example, at home I spool
jobs to a collection of SCSI drives, with the pools situated on a set
of ATA and USB drives (and btw, USB drives are sooo slow).
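
To make that concrete, here is a sketch of the directives involved (paths,
sizes and names are hypothetical): the spool location and its cap live on
the Device in bacula-sd.conf, and spooling is switched on per job (or in a
JobDefs) in bacula-dir.conf.

# bacula-sd.conf -- spool on the fast disks, volumes on the slow ones
Device {
   Name = FileStorage                      # hypothetical
   Media Type = File
   Archive Device = /mnt/pools             # hypothetical: the ATA/USB pool disks
   Spool Directory = /mnt/spool            # hypothetical: the faster SCSI drives
   Maximum Spool Size = 100g               # cap total spool usage on this device
   LabelMedia = yes
   Random Access = yes
   AutomaticMount = yes
   RemovableMedia = no
   AlwaysOpen = no
}

# bacula-dir.conf -- enable data spooling for the job
Job {
   Name = "example-job"                    # hypothetical
   JobDefs = "DefaultJob"
   Spool Data = yes
}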

Steve

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Software compression: None

2010-11-18 Thread Steve Thompson
Bacula 5.0.2, CentOS 5.5, x86_64.

I have noticed a few "Software Compression: None" reports lately, affecting
about 10% of my backup jobs, both large and small. Software compression is
most definitely turned on. What is this telling me?

Steve
--------
Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,282 miles per second: it's not just a good idea, it's the law"


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Large scale disk-to-disk Bacula deployment

2010-12-04 Thread Steve Thompson
On Wed, 1 Dec 2010, Henrik Johansen wrote:

> The remaining posts will follow over the next month or so.

Just a minor question from part III. You state that your storage servers 
each use three Perc 6/E controllers, allowing the attachment of 9 MD1000 
shelves. I believe that you can attach 6 shelves, not 3, to a single Perc 
6/E (3 to each port).

Steve

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Interesting performance observation

2010-12-05 Thread Steve Thompson
Bacula 5.0.2. This is not a problem; just an observation.

I do backups to disk only, using six RAID arrays for storage, totalling 
45TB physical disk. Originally I used six pools and six devices, but ran 
into disk space management issues. This setup was converted to one device 
and one pool per client (about 50). Backups are compressed and TLS is used 
(storage is offsite). No other changes were made: backup throughput 
performance almost exactly doubled.

Steve

Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,282 miles per second: it's not just a good idea, it's the law"


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Interesting performance observation

2010-12-06 Thread Steve Thompson
On Mon, 6 Dec 2010, Josh Fisher wrote:

> On 12/5/2010 9:20 AM, Steve Thompson wrote:
>> Bacula 5.0.2. This is not a problem; just an observation.
>>
>> I do backups to disk only, using six RAID arrays for storage, totalling
>> 45TB physical disk. Originally I used six pools and six devices, but ran
>> into disk space management issues. This setup was converted to one device
>> and one pool per client (about 50). Backups are compressed and TLS is used
>> (storage is offsite). No other changes were made: backup throughput
>> performance almost exactly doubled.
>
> If multiple jobs were running simultaneously with the 6 device setup,
> then that would skew the results of a direct job-to-job comparison.

Yes, this is true. However, the jobs themselves, and their schedules, were
not changed; the same concurrency was in place both before and after.

Steve
----
Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,282 miles per second: it's not just a good idea, it's the law"


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Software compression: None

2011-01-18 Thread Steve Thompson

Bacula 5.0.2, CentOS 5.5, x86_64.

I reported this back in November but got no response. I have a lot of full
backups that are reporting "Software Compression: None". Software
compression is most definitely turned on. For example, all of my fileset
definitions begin along these lines:

FileSet {
   Name = "FooBarBaz"
   Include {
      Options {
         compression = GZIP
      }
      File = /stuff
   }
}

Whether software compression happens or not seems to be random. Anyone 
know why this is happening?

Steve

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Software compression: None

2011-01-19 Thread Steve Thompson
On Tue, 18 Jan 2011, Dan Langille wrote:

> On 1/18/2011 4:16 PM, Steve Thompson wrote:
>> Whether software compression happens or not seems to be random. Anyone
>> know why this is happening?
>
> There was a discussion this week about this.  Add Signature to your options.

I will certainly try that, but I'm not sure that this is the whole story, 
and in any event I do not want to have to introduce signatures long-term. 
The issue is that software compression sometimes happens, sometimes not, 
with no changes in any configuration. Looks like a bug to me.

Steve

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Software compression: None

2011-01-20 Thread Steve Thompson
On Thu, 20 Jan 2011, Martin Simmons wrote:

> It reports "None" if there were no files in the backup or if the compression
> saved less than 0.5%, so it doesn't necessarily mean that it wasn't attempted.

I understand that, but I have several file sets that, for a full backup 
level, sometimes give in the region of 60% compression and sometimes none, 
depending on wind direction. It seems to be about 50/50 whether 
compression is used or not.

Steve

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Software compression: None

2011-01-20 Thread Steve Thompson
On Thu, 20 Jan 2011, Dan Langille wrote:

> On 1/20/2011 7:24 AM, Steve Thompson wrote:
>> On Thu, 20 Jan 2011, Martin Simmons wrote:
>> 
>>> It reports "None" if there were no files in the backup or if the 
>>> compression
>>> saved less than 0.5%, so it doesn't necessarily mean that it wasn't 
>>> attempted.
>> 
>> I understand that, but I have several file sets that, for a full backup
>> level, sometimes give in the region of 60% compression and sometimes none,
>> depending on wind direction. It seems to be about 50/50 whether
>> compression is used or not.
>
> Time for new eyes.  Post the job emails.

I'm re-running all of the jobs with a signature added. Will post in a 
couple of days when it's done.

Steve

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Software compression: None

2011-01-20 Thread Steve Thompson
On Thu, 20 Jan 2011, Dan Langille wrote:

> Time for new eyes.  Post the job emails.

One full backup completed. Here are the relevant definitions:

Job {
   Name = "bear_data15"
   JobDefs = "defjob"
   Pool = Pool_bear_data15
   Write Bootstrap = "/var/lib/bacula/bear_data15.bsr"
   Client = bear-fd
   FileSet = "bear_data15"
   Schedule = "Saturday1"
}

FileSet {
   Name = "bear_data15"
   Include {
      Options {
         compression = GZIP
         signature = MD5
         sparse = yes
         noatime = yes
      }
      Options {
         exclude = yes
         wilddir = "*/.NetBin"
         wilddir = "*/.Trash"
         wilddir = "*/.nbs"
         wilddir = "*/.maildir/.spam"
      }
      File = /mnt/bear/data15
   }
}

Storage {
   Name = Storage_bear_data15
   Address = 
   SD Port = 9103
   Password = ""
   Device = Data_bear_data15
   Media Type = Media_bear_data15
   Maximum Concurrent Jobs = 1
   TLS Enable = yes
   TLS Require = Yes
   TLS CA Certificate File = 
   TLS Certificate = 
   TLS Key = 
}

Pool {
   Name = Pool_bear_data15
   Storage = Storage_bear_data15
   Pool Type = Backup
   Recycle = yes
   Recycle Oldest Volume = yes
   Auto Prune = yes
   Volume Retention = 6 weeks
   Maximum Volumes = 300
   Maximum Volume Bytes = 4g
   Label Format = "bear_data15-"
}

and the latest job e-mail:

19-Jan 22:00 cbe-dir JobId 8749: No prior Full backup Job record found.
19-Jan 22:00 cbe-dir JobId 8749: No prior or suitable Full backup found in 
catalog. Doing FULL backup.
19-Jan 22:30 cbe-dir JobId 8749: Start Backup JobId 8749, 
Job=bear_data15.2011-01-19_22.00.01_22
19-Jan 22:31 cbe-dir JobId 8749: Created new Volume "bear_data15-21646" in 
catalog.
19-Jan 22:31 cbe-dir JobId 8749: Using Device "Data_bear_data15"
19-Jan 22:31 backup1-sd JobId 8749: Labeled new Volume "bear_data15-21646" on 
device "Data_bear_data15" (/mnt/backup1/data5).
...
20-Jan 05:52 backup1-sd JobId 8749: Job write elapsed time = 07:20:18, Transfer 
rate = 14.68 M Bytes/second
20-Jan 05:52 cbe-dir JobId 8749: Bacula cbe-dir 5.0.2 (28Apr10): 20-Jan-2011 
05:52:43
   Build OS:   x86_64-redhat-linux-gnu redhat
   JobId:  8749
   Job:bear_data15.2011-01-19_22.00.01_22
   Backup Level:   Full (upgraded from Incremental)
   Client: "bear-fd" 5.0.2 (28Apr10) 
x86_64-redhat-linux-gnu,redhat,
   FileSet:"bear_data15" 2011-01-19 22:00:01
   Pool:   "Pool_bear_data15" (From Job resource)
   Catalog:"BackupCatalog" (From Client resource)
   Storage:"Storage_bear_data15" (From Pool resource)
   Scheduled time: 19-Jan-2011 22:00:01
   Start time: 19-Jan-2011 22:31:00
   End time:   20-Jan-2011 05:52:43
   Elapsed time:   7 hours 21 mins 43 secs
   Priority:   10
   FD Files Written:   171,826
   SD Files Written:   171,826
   FD Bytes Written:   387,918,677,223 (387.9 GB)
   SD Bytes Written:   387,949,527,809 (387.9 GB)
   Rate:   14636.8 KB/s
   Software Compression:   None
   VSS:no
   Encryption: no
   Accurate:   no
   Volume name(s): 
   Volume Session Id:  124
   Volume Session Time:1295305183
   Last Volume Bytes:  1,695,092,917 (1.695 GB)
   Non-fatal FD errors:0
   SD Errors:  0
   FD termination status:  OK
   SD termination status:  OK
   Termination:Backup OK

20-Jan 05:52 cbe-dir JobId 8749: Begin pruning Jobs older than 1 month 5 days .
20-Jan 05:52 cbe-dir JobId 8749: No Jobs found to prune.
20-Jan 05:52 cbe-dir JobId 8749: Begin pruning Jobs.
20-Jan 05:52 cbe-dir JobId 8749: No Files found to prune.
20-Jan 05:52 cbe-dir JobId 8749: End auto prune.

The bacula installation was created from RPMs that I built myself from
the source RPM (bacula-5.0.2-1.src.rpm), and zlib is included:

# ldd /usr/sbin/bacula-fd | grep libz
 libz.so.1 => /usr/lib64/libz.so.1 (0x003731c0)

-Steve

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Software compression: None

2011-01-20 Thread Steve Thompson
On Thu, 20 Jan 2011, Martin Simmons wrote:

> This will never compress -- the "default" Options clause needs to be the
> last one, but you have it as the first one.

Yes, of course you are correct; thank you. And I've even read that in the 
documentation. And moving the default Options clause to the end of the 
Include does result in compression always being used. So all is well now.
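
For the record, and for anyone hitting the same symptom later, this is the
earlier bear_data15 FileSet with the clauses in the working order, i.e. the
default Options clause moved after the exclude clause; nothing else changes:

FileSet {
   Name = "bear_data15"
   Include {
      Options {
         exclude = yes
         wilddir = "*/.NetBin"
         wilddir = "*/.Trash"
         wilddir = "*/.nbs"
         wilddir = "*/.maildir/.spam"
      }
      Options {                   # default clause now last, so compression applies
         compression = GZIP
         signature = MD5
         sparse = yes
         noatime = yes
      }
      File = /mnt/bear/data15
   }
}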

I guess I had better not wonder why I've always had it this way, and _was_
getting compression 50% of the time. Moving on...

Steve

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] bconsole history

2011-04-21 Thread Steve Thompson
I'm using bacula 5.0.2 on CentOS 5.5. Is there any way, in this or any other
version of bacula, to disable generation of the .bconsole_history file?

-steve

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users