I have been using Bacula on a Debian box (in a Windows XP network) for
a while now, and it's working great, except that the backup volume
(stored on the hard disk) is getting too big.
I have since learned that all the purging options in Bacula only
remove files and jobs from the bacula database, an
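That matches Bacula's documented behavior: prune/purge only remove catalog records, while the volume files on disk keep their size until the volumes are recycled or deleted. As a hedged sketch (names and retention values here are illustrative, not from the original post), a Pool resource can cap disk usage by recycling volumes:

```
# Sketch only -- pool name, retention, and size limits are hypothetical.
# Recycling lets Bacula reuse purged volumes instead of growing forever.
Pool {
  Name = DiskPool
  Pool Type = Backup
  Recycle = yes                  # reuse volumes once purged
  AutoPrune = yes                # prune expired jobs/files automatically
  Volume Retention = 30 days     # after this, a volume may be recycled
  Maximum Volume Bytes = 5G      # cap each volume file's size
  Maximum Volumes = 10           # cap total disk usage for the pool
}
```

With limits like these, old volume files are overwritten in place rather than accumulating.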
The list of databases includes four, the significant one says:
bacula postgres UTF8
Does the fact that the bacula database shows "postres" as the owner
suggest a solution?
Van
Purcocks, Graham wrote:
>Sounds like the database bacula doesn't exist
>
>Try
>
>psql -l
>
>To see what databases
Hi All (Hi Martin)
I already have the override (in my case, repeated for wed, thu, fri)
Run = Level=Differential Storage=SonyAIT tue at 22:30
But Bacula still wants to upgrade my backup to a Full Backup. With so much
data (>150 GB) this will fill two or more AIT2 tapes. Hence I would like a U
On 11/3/06, Jaap Stolk <[EMAIL PROTECTED]> wrote:
> I was wondering if I could do without the differential backups altogether?
(in reply to my own post)
I did some more reading and found that the differential backup only
looks at the file date/time, exactly like the incremental backup. So
this is
Alex Chekholko wrote:
> On Fri, 03 Nov 2006 01:13:32 +1100
> Gerard Sharpe <[EMAIL PROTECTED]> wrote:
>
>
>> Hi,
>>
>> I am a first time user of Bacula trying to get a Sun L280 DLT7000 tape
>> drive (operating mode set to random) with autochanger working.
>>
>> I have confirmed the drive works
Hi,
On 11/3/2006 7:53 AM, Berner Martin wrote:
> Hi
> We plan to use a tape changer for SDLT tapes.
> It has to be able to store about 10 tapes.
> Is there anyone who can recommend a tape changer that works fine with
> Bacula?
>
> Server is a Sun Solaris 10 and runs Bacula 1.38.9.
I assume
Gerard Sharpe wrote:
> Alex Chekholko wrote:
>
>> On Fri, 03 Nov 2006 01:13:32 +1100
>> Gerard Sharpe <[EMAIL PROTECTED]> wrote:
>>
>>
>>
>>> Hi,
>>>
>>> I am a first time user of Bacula trying to get a Sun L280 DLT7000 tape
>>> drive (operating mode set to random) with autochanger work
> On Fri, 3 Nov 2006 11:27:25 +0100, Jaap Stolk said:
>
> On 11/3/06, Jaap Stolk <[EMAIL PROTECTED]> wrote:
> > I was wondering if I could do without the differential backups altogether?
>
> (in reply to my own post)
> I did some more reading and found that the differential backup only
> loo
> On Fri, 3 Nov 2006 09:46:48 +, Alan To said:
>
> Hi All (Hi Martin)
>
> I already have the override (in my case, repeated for wed, thu, fri)
> Run = Level=Differential Storage=SonyAIT tue at 22:30
>
> But Bacula still wants to upgrade my backup to a Full Backup. With so much
> data
Hi All (Hi Martin)
I cannot have one Job (configured) pointing to more than one Storage (can I?)
and I therefore have two separate Jobs listed.
E.g.
Job A
  Name = "FullJob"
  Level = "Full"
  Storage = "USB2"
  Schedule = "MonSch"
Job B
  Name = "DiffJob"
  Level = "Differential"
  Storage = "SonyAIT"
  Sch
On Nov 2, 2006, at 08:30, Robert Nelson wrote:
Landon,
I've changed the code so that the encryption code prefixes the data block
with a block length prior to encryption.
The decryption code accumulates data until a full data block is decrypted
before passing it along to the decompressio
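The fix Robert describes (length-prefix each block before encryption, then accumulate ciphertext until a whole block decrypts before passing it downstream) can be sketched in miniature. This is an illustration of the framing idea only; a toy XOR transform stands in for Bacula's actual OpenSSL-based cipher code:

```python
import struct

def frame(plaintext: bytes) -> bytes:
    # Prefix the data block with its length prior to "encryption"
    # (toy XOR stands in for the real cipher).
    framed = struct.pack(">I", len(plaintext)) + plaintext
    return bytes(b ^ 0x5A for b in framed)

def deframe(stream: bytes):
    # Accumulate decoded data until a full block is available before
    # passing it along (e.g. to decompression).
    buf = bytes(b ^ 0x5A for b in stream)
    out, off = [], 0
    while off + 4 <= len(buf):
        (n,) = struct.unpack(">I", buf[off:off + 4])
        if off + 4 + n > len(buf):
            break  # incomplete block: wait for more data
        out.append(buf[off + 4:off + 4 + n])
        off += 4 + n
    return out

blocks = deframe(frame(b"hello") + frame(b"world"))
```

Because block boundaries are restored by the length prefixes, downstream filters that require whole blocks (compression, sparse handling) see intact units.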
Hi Alan,
a job definition allows one Storage directive only, of course.
All directives are keywords, thus they must be unique. But,
within a Schedule definition, you may change the storage for
the scheduled job. Example:
Schedule {
  Name = "Foo"
  Run = Full Storage="Tape" Pool="
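Applied to Alan's setup (USB2 for fulls, SonyAIT for differentials, schedule name "MonSch" from his job definitions), a completed Schedule along those lines might look like this; the day/time values are illustrative:

```
Schedule {
  Name = "MonSch"
  Run = Level=Full Storage=USB2 mon at 22:30
  Run = Level=Differential Storage=SonyAIT tue-fri at 22:30
}
```

A single Job then references Schedule = "MonSch", and the per-run Storage override picks the device, so no second Job is needed.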
-- Forwarded message --
From: John Drescher <[EMAIL PROTECTED]>
Date: Nov 3, 2006 8:41 AM
Subject: Re: [Bacula-users] Getting started
To: "G. Armour Van Horn" <[EMAIL PROTECTED]>
On 11/3/06, G. Armour Van Horn <[EMAIL PROTECTED]> wrote:
The list of databases includes four, the signifi
On Nov 2, 2006, at 13:22, Robert Nelson wrote:
The problem is that currently there are three filters defined: compression,
encryption, and sparse file handling. The current implementation of
compression and sparse file handling both require block boundary
preservation. Even if zlib streamin
On Nov 1, 2006, at 23:25, Michael Brennen wrote:
On Wed, 1 Nov 2006, Robert Nelson wrote:
On top of the issue with the reversed processing during restore that I
previously mentioned, there is a fundamental flaw in the processing of
compressed+gzipped data. The problem is that boundaries a
Bacula seems not to understand that today is week 44 of the year.
It seemed to understand that last week was week 43, as it ran the
job that was scheduled for that week. Here's the schedule that
applies:
Run = Level=Full Pool=OffSite Storage=Ultrium thursday
w03,w07,w11,w15,w19,w23,w27,w31,w35
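For what it's worth, Nov 3, 2006 does fall in ISO week 44. Note that Bacula's wNN numbering is its own scheme and is not guaranteed to match ISO 8601 week numbers; an off-by-one between the two is a classic cause of weekly schedules firing (or skipping) a week early. A quick sanity check of the ISO week:

```python
import datetime

# ISO week number for the date in the post (Fri, Nov 3, 2006).
# Bacula's w00-w53 scheme may number weeks differently than ISO 8601.
week = datetime.date(2006, 11, 3).isocalendar()[1]
print(week)  # -> 44
```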
Hi Arno, and thanks for your reply. That didn't help, though. Are they
supposed to be the same? Since the error is that it cannot find a specific
device, it should mean that the storage daemon responds, right? Is it
correct to configure it the way I have?
//Nicklas
On 11/2/06, Arno Lehmann <
[EMAIL PROTECTE
Wrong way to do it. Files are listed by job, and these are two different
jobs.
See the other messages that include defining storage in the "Schedule"
resource. DO NOT define more than one job.
Alan To wrote:
> Hi All (Hi Martin)
>
> I cannot have
On Nov 3, 2006, at 6:42 AM, Gerard Sharpe wrote:
> Gerard Sharpe wrote:
>> Alex Chekholko wrote:
>>
>>> On Fri, 03 Nov 2006 01:13:32 +1100
>>> Gerard Sharpe <[EMAIL PROTECTED]> wrote:
>>>
>>>
>>>
Hi,
I am a first time user of Bacula trying to get a Sun L280 DLT7000 tape
These two commands apparently went through as root; they fail as
postgres. And yes, it was "postgres" reported by the "psql -l" command;
the "postres" was a typo.
[EMAIL PROTECTED] bacula]# ./make_postgresql_tables -U postgres
psql: FATAL: Ident authentication failed for user "postgres"
Creat
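An Ident authentication failure like this usually means the OS user running the command doesn't match the PostgreSQL role it is connecting as. A sketch of the two common fixes, assuming a stock pg_hba.conf layout (paths, service name, and script location vary by distro):

```
# Option 1: run the script as the OS user "postgres" so ident matches:
#   su - postgres -c "./make_postgresql_tables"
#
# Option 2: in pg_hba.conf, switch local connections from ident to
# password (md5) auth, then reload PostgreSQL:
local   all   all   md5
#   service postgresql reload
```

Either way, the script connects as the "postgres" role that owns the bacula database.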
> On Fri, 03 Nov 2006 14:10:47 +0100, Robert Wirth said:
>
> Hi Alan,
>
> a job definition allows one Storage directive only, of course.
> All directives are keywords, thus they must be unique. But,
> within a Schedule definition, you may change the storage for
> the scheduled job. Exampl
On Nov 2, 2006, at 16:29, Robert Nelson wrote:
In that case, would you like me to commit the code I have?
That'd be super. Thanks for fixing it.
I agree about reworking the stream implementation. The existing code could
be written as a number of filters: gzip, openssl, sparse, block/
deb
> Landon,
>
> I've changed the code so that the encryption code prefixes the data block
> with a block length prior to encryption.
>
> The decryption code accumulates data until a full data block is decrypted
> before passing it along to the decompression code.
>
> The code now works for all four
This code is backwards compatible for everything except encrypted data.
Previously compressed backups will still work fine.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Kern
Sibbald
Sent: Friday, November 03, 2006 4:15 PM
To: Robert Nelson
Cc: [EMAIL PR
> The problem is that currently there are three filters defined:
> compression,
> encryption, and sparse file handling. The current implementation of
> compression and sparse file handling both require block boundary
> preservation. Even if zlib streaming could handle the existing block
> based
> This code is backwards compatible for everything except encrypted data.
> Previously compressed backups will still work fine.
I'm not 100% sure what you mean, but here are my thoughts:
If it breaks something that previously worked, then it does not fit
with the Bacula philosophy of always b
I guess it comes down to the definition of "previous versions". If you
exclude previous development versions (ie 1.39.x) then it is backwards
compatible since the problem and the fix only affect encrypted data which,
as far as I know, wasn't available in 1.38.x.
-Original Message-
From:
On Friday 03 November 2006 18:39, Kern Sibbald wrote:
> > This code is backwards compatible for everything except encrypted data.
> > Previously compressed backups will still work fine.
>
> I'm not 100% sure what you mean, but here are my thoughts:
>
> If it breaks something that previously worked,
> I guess it comes down to the definition of "previous versions". If you
> exclude previous development versions (ie 1.39.x) then it is backwards
> compatible since the problem and the fix only affect encrypted data which,
> as far as I know, wasn't available in 1.38.x.
Ah, good point. Yes, I t
Perhaps if I explained the problem:
Currently (as of 1.39.27)
No filters = Works fine
Sparse = Works fine
Compression = Works fine
Encryption = Works fine
Sparse + Compression = Restore broken
Sparse + Encryption = Restore broken
Sparse + Compression + Encryption = Restore broken
Compression + E
I've got to go now. Thanks for the explanation.
I'll respond as soon as I can, but I would also like to see Landon's
response.
> Perhaps if I explained the problem:
>
> Currently (as of 1.39.27)
>
> No filters = Works fine
> Sparse = Works fine
> Compression = Works fine
> Encryption = Works fin
Alex Chekholko wrote:
>
> On Nov 3, 2006, at 6:42 AM, Gerard Sharpe wrote:
>
>> Gerard Sharpe wrote:
>>> Alex Chekholko wrote:
>>>
On Fri, 03 Nov 2006 01:13:32 +1100
Gerard Sharpe <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I am a first time user of Bacula trying to ge