in the past, I have had problems with restores failing because the
device for differential volumes can't find the volumes stored by the
incremental job.
I also have not been quite satisfied with having all the files in one
large directory without any structure, e.g., when I have different
retention policies.
Here's version 1.1 of my bacula-du tool which outputs the contents of
a Bacula job like du(1) would do.
The main change since 1.0 (which was forwarded here by Phil Stracchino
yesterday) is that jobs from Windows clients are handled properly
(thanks, Bob Hetzel!).
(The main missing feature is pars
Paul Davis writes:
> You are probably correct, given that current PRML encoding techniques
> and the given density of magnetic media (both drive and tape). The
> NIST specification even suggests that a single pass of random data
> would likely be sufficient (at least for drives, nothing is said of
Jesper Krogh writes:
> I have a feature request, that is more a "usability issue" than
> anything else. Should be trivial to implement if people think it is
> equally important as I do.
>
> Item ?: On restore, check space on target volume
no, it's not trivial to do, at least not accurately.
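for reference, the easy half -- asking the OS what's free on the
target -- is a one-liner; the hard half is predicting what the restore
will actually consume (sparse files, hard links, compression), which
is why an accurate check isn't trivial. a rough Python sketch (the
function name and threshold below are made up for illustration):

```python
import os

def free_bytes(path):
    """Space available to unprivileged users on the filesystem
    holding `path`."""
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize

# A naive pre-restore check would compare this against the sum of the
# catalog's file sizes -- and that sum is exactly the part that is
# hard to get right (sparse files, hard links, compression).
if free_bytes("/tmp") < 10 * 1024**3:
    print("warning: less than 10 GiB free on restore target")
```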
Jesper Krogh writes:
> What: Currently a restore defaults to "yes" in the confirm dialog.
> This is suggested to change the default to "no" or better "no
> defaults" but require operator intervention.
+1
> Why: In our production environment we have filesystems with
> mi
Dan Langille writes:
> Kjetil Torgrim Homme wrote:
>> Stephen Thompson writes:
>>> I sort mine within my configuration files and consequently they're
>>> sorted in the console views.
>>
>> some lists are ordered by ClientId (i.e. chronologically by when
Stephen Thompson writes:
> I sort mine within my configuration files and consequently they're
> sorted in the console views.
some lists are ordered by ClientId (i.e. chronologically by when they
were added to the database), others are sorted by the ordering in the
configuration files.
--
Kjetil Torgrim Homme
[Kern Sibbald]:
>> [Chandranshu]:
>> > [I] modified the code in src/cats/mysql.c to print the error
>> > message returned by mysql_error(). Then, I compiled and ran the
>> > code to see the most dubious error in my DBA career:
>> > Error 2002 (HY000): Can't connect to local MySQL server through
>>
Kern Sibbald writes:
> Yes, I am suggesting that all distros should use the Bacula
> recommended configuration. We can then automate a lot of nice
> stuff.
why not just use --prefix=/opt/bacula ?
the installed files will still be a simple collection of files. I
don't think there needs to be a
Solaris doesn't ship static libraries, but the test for compatibility
unconditionally looks for the non-existent .a file, so it refuses to
enable it. here's a patch for autoconf (relative to trunk, revision
8600).
Index: autoconf/bacula-macros/db.m4
===================================================================
Arno Lehmann <[EMAIL PROTECTED]> writes:
> And I strongly recommend to not measure throughput using
> /dev/zero... if you use compression, you're almost only measuring
> bus throughput as all the zeros will be compressed away... it's
> better to prepare a big file with random data for it. Unfortu
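a quick way to prepare such a file (path and size here are arbitrary
choices for this sketch):

```shell
# /dev/zero compresses to almost nothing, so with compression enabled
# you mostly end up measuring bus throughput.  Prepare an
# incompressible test file from /dev/urandom instead:
dd if=/dev/urandom of=/tmp/testdata bs=1M count=1024

# then time the real write, e.g. (device name is site-specific):
# dd if=/tmp/testdata of=/dev/nst0 bs=64k
```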
Marc Schiffbauer <[EMAIL PROTECTED]> writes:
> * Kjetil Torgrim Homme schrieb am 15.10.08 um 13:07 Uhr:
>> you will only have to traverse all filesystems if OneFS is false.
>> sure, it may be slightly tricky to implement a clever cutoff. e.g.,
>> with
>>
>>
Kern Sibbald <[EMAIL PROTECTED]> writes:
> 3. Your use of the Exclude {} section, in for example:
>> Exclude {
>> File = /tmp
>> FileRegex = "/home/*/[Cc]ache"
>> }
>
> won't work (it seems to me) unless there are some implicit
> assumptions that I am not aware of. How can you apply an
John Huttley <[EMAIL PROTECTED]> writes:
>
> http://wiki.bacula.org/doku.php?id=wiki:playground
| "ctime Timestamp [...] The datetime it was created."
ctime is inode change time on Unix clients, and create time on Windows
clients. this difference alone makes the value of the field suspect.
--
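the Unix semantics are easy to demonstrate: a metadata-only operation
like chmod(2) updates st_ctime but leaves st_mtime alone. a small
Python sketch:

```python
import os
import tempfile
import time

fd, path = tempfile.mkstemp()
os.close(fd)
before = os.stat(path)

time.sleep(1.1)          # let the clock advance past timestamp granularity
os.chmod(path, 0o644)    # metadata-only change, content untouched

after = os.stat(path)
assert after.st_ctime_ns > before.st_ctime_ns   # inode change time bumped
assert after.st_mtime_ns == before.st_mtime_ns  # modification time unchanged
os.remove(path)
```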
Kern Sibbald <[EMAIL PROTECTED]> writes:
> I think the best suggestion that I have seen for the name is (at
> least in the current context):
>
> Exclude Dirs Containing = .no_backup
>
> That seems to me to be a very good name.
I agree.
> Concerning the placement of the directive: I think it is wo
"David Boyes" <[EMAIL PROTECTED]> writes:
>> Bacula *must* read the entire file in order to back it up, so the
>> exact byte count is known with no extra cost.
>
> Umm, not on the OSes I mentioned. If you fstat the file or read the
> directory inode with Unix compatibility on, the underlying OS re
Kern Sibbald <[EMAIL PROTECTED]> writes:
> There is one important point that I would like to bring up, and that
> is that Bacula writes the attributes record (which contains the
> LStat) before it backs up the file (i.e. before it reads the file).
> This is because on the restore side, the File da
"David Boyes" <[EMAIL PROTECTED]> writes:
> Partially, but I've been working on USS on z/OS and OpenVMS Bacula
> clients, where the filesystems are block oriented rather than byte
> oriented. One *can* obtain a precise dataset size in bytes, but the
> cost is reading the entire file to determine w
Bill Moran <[EMAIL PROTECTED]> writes:
> My understanding of what happens is this:
> 1) Job runs and creates a filename entry for a filename not used before
>    (let's say something really unique, such as would be created by
>    mktemp)
> 2) Time goes by and that file is deleted, never to be seen
Kjetil Torgrim Homme <[EMAIL PROTECTED]> writes:
> I wanted to make a du(1) style report for Bacula, but was a bit
> surprised to see that this information is not readily available in the
> File table -- it's encoded as quasi-base64 in the LStat column. I
> modified b
I wanted to make a du(1) style report for Bacula, but was a bit
surprised to see that this information is not readily available in the
File table -- it's encoded as quasi-base64 in the LStat column. I
modified base64.sql[1] to support Bacula's format, but it's running
too slow to be useful, ie. le
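for anyone wanting to do this outside SQL, here is a Python sketch of
my reading of the encoding: each stat field is an unpadded base64
integer, fields separated by spaces. the field order below is an
assumption -- check it against src/lib/base64.c in your Bacula version
before trusting the output:

```python
# Bacula's "quasi-base64": standard alphabet, no padding, one integer
# per space-separated field.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
VALUE = {c: i for i, c in enumerate(ALPHABET)}

# Assumed order of the leading fields (verify in base64.c):
FIELDS = ("st_dev", "st_ino", "st_mode", "st_nlink", "st_uid", "st_gid",
          "st_rdev", "st_size", "st_blksize", "st_blocks",
          "st_atime", "st_mtime", "st_ctime")

def decode_field(field):
    """Decode one unpadded base64 integer."""
    n = 0
    for ch in field:
        n = n * 64 + VALUE[ch]
    return n

def decode_lstat(lstat):
    """Return a dict of the leading stat fields of an LStat string."""
    return dict(zip(FIELDS, (decode_field(f) for f in lstat.split())))

# Made-up sample: mode 0644 regular file, size 4096.
info = decode_lstat("A B IGk B A A A BAA A A A A A")
print(oct(info["st_mode"]), info["st_size"])  # 0o100644 4096
```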
Kern Sibbald <[EMAIL PROTECTED]> writes:
> No, match_bsr.c is called for each record. You might take another
> look at it and see if it would be possible to move the code from
> read_record.c to match_bsr.c -- even if it takes a new subroutine
> call. I haven't had a chance to look at that aspec
Kern Sibbald <[EMAIL PROTECTED]> writes:
> - As Martin points out, this code gives the SD a bit more knowledge
> of the records it has stored, but unless someone has a better idea,
> I see no alternative.
the SD has this knowledge already, even if it ignores it.
> - One aspect of this code I have
I needed to restore a subset of some old backups. Restoring the full
backups would need a terabyte of temporary storage, which seemed a bit
wasteful (and inconvenient to get hold of) since the data I was
interested in took less than a gigabyte.
Anyway -- I implemented a simple regex to filter the
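the filtering idea itself is just a regex over the catalog paths; a
toy Python sketch (the pattern and paths are made up for illustration,
this is not the actual patch):

```python
import re

# Select only the paths worth restoring instead of the whole job.
pattern = re.compile(r"^/home/[^/]+/projects/")
paths = [
    "/home/alice/projects/report.txt",
    "/home/bob/music/track.flac",
    "/var/log/messages",
]
selected = [p for p in paths if pattern.match(p)]
print(selected)  # ['/home/alice/projects/report.txt']
```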
Frank Sweetser <[EMAIL PROTECTED]> writes:
> Translations: looks *very* nice. In particular, the friendly web
> interface makes it look like a good way to let people contribute
> without digging into C++.
unfortunately, it interoperates very badly with other tools, so it
forces all translators t
(269 + 58 lines of quotes trimmed -- hint hint :-)
Hemant Shah <[EMAIL PROTECTED]> writes:
> --- On Wed, 8/6/08, Kern Sibbald <[EMAIL PROTECTED]> wrote:
>> I would recommend that you talk to Veritas about the problem
>> without mentioning Bacula. Unless you configure Bacula
>> differently, it che
"John Drescher" <[EMAIL PROTECTED]> writes:
> Kjetil Torgrim Homme <[EMAIL PROTECTED]> wrote:
>> "Shad L. Lords" <[EMAIL PROTECTED]> writes:
>>> Item 1: Allow tape drives to be associated with autochanger
>>> resource
"Shad L. Lords" <[EMAIL PROTECTED]> writes:
> Item 1: Allow tape drives to be associated with autochanger
> resource on different host
> Note: A single robot controls multiple drives but is only connected
> to one machine. It is currently possible to work around
>
Kjetil Torgrim Homme <[EMAIL PROTECTED]> writes:
> This leads to my proposal: make the priority override option an
> attribute for each Job. This way an installation can turn it on
> via JobDefs, and turn it off for the Catalog backup. The attached
> patch adds the keyword
I've been looking at the logic associated with what priorities get run
first. Here's the scenario I want to improve the behaviour for:
Eight backup jobs are running at priority 10, and several more are
queued (also at priority 10). The number eight reflects the max
concurrenc