Hello,
On 5/15/2007 10:56 PM, Peter Crighton wrote:
> I am using conf files based on the defaults, with times changed. To
> summarise, my backups are:
>
> Linux server "home" files daily at 11:00
> Linux server shares (i.e. my key files shared to other PCs) daily at
> 11:00
> Above on a daily inc
Thanks for the responses, Mike and Arno.
It went easily and trouble-free enough, given the complexity and
interrelation between the config files that live on multiple hosts.
On Monday 14 May 2007 4:28:23 pm Arno Lehmann wrote:
> > I've been using MySQL. I know how to get a dump of the database
Hi,
On 5/15/2007 7:29 PM, Naufal Sheikh wrote:
> Hey,
>
> thanks for your help. Just a few clarifications. For the "Max Wait Time",
> is the time specified in minutes or seconds? Or can I just write 25 min
> or 300 sec?
Hmm. I'm unsure... there was some problem with these times a while ago
where it di
I am using conf files based on the defaults, with times changed. To
summarise, my backups are:
Linux server "home" files daily at 11:00
Linux server shares (i.e. my key files shared to other PCs) daily at
11:00
Above on a daily incremental / weekly differential / monthly full
cycle
Windows client
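For anyone after a similar rotation, a minimal sketch of that cycle as a
Bacula Schedule resource (the resource name and exact run specs are my
assumptions, not Peter's actual config):

  Schedule {
    Name = "MonthlyCycle"                           # hypothetical name
    Run = Level=Full 1st sun at 11:00               # monthly full
    Run = Level=Differential 2nd-5th sun at 11:00   # weekly differential
    Run = Level=Incremental mon-sat at 11:00        # daily incremental
  }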
Hi,
On 5/15/2007 9:52 PM, Jordan Desroches wrote:
> So it doesn't look like the MySQL process is tanking the system at all.
> I increased cache sizes in MySQL with no effect, along with buffer sizes
> in sysctl.conf. I also tried using SQLite, and got marginally better
> throughput. Has anyo
Jordan Desroches wrote:
> So it doesn't look like the MySQL process is tanking the system at all. I
> increased cache sizes in MySQL with no effect, along with buffer sizes in
> sysctl.conf. I also tried using SQLite, and got marginally better
> throughput. Has anyone seen significantly >30 MB
So it doesn't look like the MySQL process is tanking the system at all. I
increased cache sizes in MySQL with no effect, along with buffer sizes in
sysctl.conf. I also tried using SQLite, and got marginally better
throughput. Has anyone seen significantly more than 30 MB/s over a gigabit
line on backup?
Th
Hi,
I recently built Bacula from source (v2.0.3) on RHEL4 ES. Here are the
configure options I specified at build time:
$ ./configure --prefix=/usr --sbindir=/usr/sbin --sysconfdir=/etc/bacula
--with-scriptdir=/etc/bacula --enable-smartalloc --with-postgresql
--with-working-dir=/var/bacula --w
Andy Collyer wrote:
>> Hello,
>>
>> On 5/11/2007 10:54 AM, Andy Collyer wrote:
> I want to *always* automatically re-use my tapes. I can achieve this
> manually by using bconsole's SQL query mode and updating the Media table:
> u
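A hypothetical statement of that shape (the actual one is cut off above;
the volume name is an assumption, and 'Recycle' marks a volume as reusable):

  -- run from bconsole's sqlquery mode; hypothetical volume name
  UPDATE Media SET VolStatus='Recycle' WHERE VolumeName='Tape001';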
Kern Sibbald wrote:
> Hello,
>
> On Thursday 10 May 2007 02:20, Berend Dekens wrote:
>> Arno Lehmann wrote:
>>> Hi,
>>>
>>> On 5/9/2007 10:27 PM, Christopher Schwerdt wrote:
>> What about status icons?
> Yes, great idea. We would like to use
Hey,
thanks for your help. Just a few clarifications. For the "Max Wait Time", is
the time specified in minutes or seconds? Or can I just write 25 min or 300
sec?
Secondly, if I want this to be a global parameter for all the jobs, I should
be setting it in the JobDefs, right?
Thanks
On 5/14/07, Arno Lehm
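A minimal sketch of setting "Max Wait Time" globally via JobDefs (resource
name is hypothetical; my understanding is that Bacula's duration directives
accept qualifiers like "secs" and "mins" and treat a bare number as seconds):

  JobDefs {
    Name = "DefaultJob"      # hypothetical name
    Type = Backup
    # "25 min" and "300 sec" should both parse; a bare "300" is seconds.
    Max Wait Time = 25 min
  }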
Hello Adam,
I have an "Always Open = No" and an "Automount = yes" in my bacula-sd.conf,
so when Bacula needs a tape it automatically mounts the right slot and gives
it back when it's not needed anymore.
With this I have no need to unmount the drive from the bacula console to
change my tapes in
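The relevant part of such a bacula-sd.conf Device resource might look like
this (the name, media type, and device node are assumptions):

  Device {
    Name = "TapeDrive"          # hypothetical name
    Media Type = DDS-4          # hypothetical media type
    Archive Device = /dev/nst0  # hypothetical device node
    Always Open = no            # release the drive when no job needs it
    Automount = yes             # mount automatically when a job needs a tape
  }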
This parameter is deprecated. MaximumVolumeJobs = 1 is the new terminology.
Sounds like you need an admin job that purges whatever tape happens to
be in the drive. I assume you're doing full backups.
I would strongly advise against this, but you gott
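In Pool terms, a minimal sketch of a one-job-per-tape setup using the new
directive (pool name is hypothetical):

  Pool {
    Name = "Daily"            # hypothetical pool name
    Pool Type = Backup
    Maximum Volume Jobs = 1   # replaces the deprecated "Use Volume Once = yes"
    Recycle = yes             # purged volumes may be reused
    AutoPrune = yes
  }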
In response to Jordan Desroches <[EMAIL PROTECTED]>:
> Compression is not enabled in the FileSet.
Can you isolate where the bottleneck is? IO? CPU? Network?
There are frequently discussions about throughput being sub-optimal on
the lists. In my experience, these fall into a few categories:
On Tue, 15 May 2007, Michel Meyers wrote:
> Just a guess/question: Do you have compression enabled in your job? If
> the client's doing compression, that might throttle its throughput.
Another thing to consider is the speed of the spool disk(s).
I had to stripe the spool across several dedicated
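For reference, the client-side compression Michel asks about is switched on
per FileSet in the Options block; a minimal sketch (name and path are
hypothetical):

  FileSet {
    Name = "FullSet"            # hypothetical name
    Include {
      Options {
        signature = MD5
        compression = GZIP      # compression happens on the client's CPU
      }
      File = /home              # hypothetical path
    }
  }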
Hi Jordan,
In my experience you will get better performance on a single client if you
turn spooling off.
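If you want to try that, it is a per-Job directive (it can also go in
JobDefs); a minimal sketch with a hypothetical job name:

  Job {
    Name = "BackupClient1"   # hypothetical name
    JobDefs = "DefaultJob"
    Spool Data = no          # write directly to the volume, skipping the spool
  }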
--
Ferdinando Pasqualetti
G.T.Dati srl
Tel. 0557310862 - 3356172731 - Fax 055720143
"Jordan Desroches" <[EMAIL PROT
Compression is not enabled in the FileSet.
Thanks,
Jordan
Michel Meyers wrote:
> Jordan Desroches wrote:
>> Greetings!
>>
>> First, I apologize if this comes through multiple times. I'm having
>> trouble connecting to the list.
>>
>> I've been
Hi,
I've installed a self-compiled RPM for SUSE Professional 9.1 on x86. It
has been working reasonably well, but lately I've had a few crashes:
one solid lockup, and last Sunday (during the differential) I got the
attached crash.
It's a P4 2.4 GHz machine. I'm backing up to an external USB hard
Jordan Desroches wrote:
> Greetings!
>
> First, I apologize if this comes through multiple times. I'm having
> trouble connecting to the list.
>
> I've been trying to bake off AMANDA and Bacula in our environment, and
> have run up against a Bacula p
Is "Use Volume Once = yes" in your Pool definition what you mean?
Cheers,
Andy
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf
> Of Jarrod Meyer
> Sent: 15 May 2007 12:12
> To: bacula-users@lists.sourceforge.net
> Subject: [Bacula-users] Volume pur
Greetings!
First, I apologize if this comes through multiple times. I'm having
trouble connecting to the list.
I've been trying to bake off AMANDA and Bacula in our environment, and
have run up against a Bacula performance snag. Amanda is regularly able
to average ~50 MB/s over our network, whil
I have been trying to get Bacula to do one thing for some time now, and
cannot get it right even after reading most of the manual (well, at least
some of it), searching the mailing lists, and of course Google.
The Setup: I have a set of volumes for each of my sites and each night a
FULL backup of the incl
Hello,
Several users have reported that the Director consumes huge amounts of memory
when running large numbers of simultaneous jobs. In fact, the Director needs
something like 80-100 KB per job, depending on the exact timing of that job.
This amount of memory per job is really quite s
On Tuesday 15 May 2007 08:44, MaxxAtWork wrote:
> On 5/14/07, Kern Sibbald <[EMAIL PROTECTED]> wrote:
> > OK, I have included a screen shot of the current Bacula. One could easily
> > start on the tool bar icons. From left to right, the icons are:
> >
> > - Connect (connect to director) -- there