On 28/05/12 05:28, Rodrigo Renie Braga wrote:
My ideal solution, which Bacula apparently DOES NOT SUPPORT: make the
Address option on the Storage resource optional, meaning that if it is not
specified, the SD address that Bacula sends to the FD is the same one
that the Director used to connect.
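For reference, the Storage resource in the Director's bacula-dir.conf is where this Address is currently mandatory; a minimal sketch (the resource name, IP, and password below are illustrative, not from this thread):

```
# bacula-dir.conf (Director side) -- hypothetical names and addresses
Storage {
  Name = File1
  Address = 192.168.1.10   # address handed to the FD; the proposal above
                           # is to default this to whatever address the
                           # Director itself used to reach the SD
  SDPort = 9103
  Password = "sd-password"
  Device = FileStorage
  Media Type = File
}
```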
2012/5/28 Alan Brown a...@mssl.ucl.ac.uk
I have a similar problem. Setting the required non-FQDN entries in /etc/hosts
does the trick.
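The /etc/hosts workaround amounts to pinning the SD's short name to whichever address is reachable from that client's VLAN; a sketch with made-up values:

```
# /etc/hosts on the client (hypothetical address and hostname)
# map the short (non-FQDN) SD name to the IP reachable from this VLAN
10.0.42.5   backup-sd
```

The obvious drawback, as noted below, is that this requires static per-host configuration on every client.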
I thought about that at first too, but like I said, there are some VLANs
that I have no control over, and it also depends on local static configuration
of 300+
Hello,
I've just started using Bacula, having installed the RPMs from the Scientific
Linux distro. I'm trying to back up several machines to disk files on the Bacula
server and am experiencing two odd problems:
i) disk volumes created by Bacula are changing their state to Error with the
message
Are you doing compression or encryption? And if so, which? If the files are
large, I found that those can really slow down the transfers.
---Guy
(via iPhone)
On 19 May 2012, at 01:00, Graham Worley g.wor...@bangor.ac.uk wrote:
Hello,
I've just started using Bacula having installed the RPMs
Hi,
On a Bacula installation with a CentOS 6.2 (64 bit) server
running Bacula 5.2.6 director and SD with an HP LTO3 autoloader
and backing up some twenty-odd machines of various platforms,
all the jobs report User specified spool size reached
much too early, currently after 1.45 GB instead
On 28/05/12 14:38, Sean Cardus wrote:
Scanning the log it looks like the storage daemon decreases its
spool size limit every time a backup job crashes. The last time
it used the full 50 GB was here:
I've been seeing exactly the same behaviour here (CentOS 4.9, Bacula SD
5.0.3) for quite
Hi
I'm receiving this message when trying to restore a backup to a USB disk.
Space on the disk is not the problem, and the disk's inodes are 99% free. The
disk contains an ext3 filesystem, and outside of bconsole I can read and write
to it with no problem.
I'm trying to restore a backup made by Bacula to the disk.
On 05/28/2012 12:44 PM, Luis H. Forchesatto wrote:
Hi
I'm receiving this message when trying to restore a backup to a USB
disk. Space on the disk is not the problem, and the disk's inodes are 99%
free. The disk contains an ext3 filesystem, and outside of bconsole I can
read and write to it with no problem.
Hi,
I am using Bacula to back up all the servers in my office. That's automated
and works fine.
I am using a NAS and an autoloader. Nightly backups are written to the NAS and
then to tape (copy jobs).
Now the problem is, I want to restore the backups on a remote server to
check whether the
I don't think you can do this. You have to restore to the FD, not the Director.
If you plug the USB disk into the FD and mount the disk there, then I think
you'll get what you want.
Or else, set up and run another FD on the same server as the Director, then
restore to the new FD.
Bryan
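The second suggestion above amounts to running an extra file daemon on the Director host and selecting it as the restore client; a minimal sketch (names, password, and paths are illustrative):

```
# bacula-fd.conf for an extra FD running on the Director host
# (names, password, and directories are assumptions)
FileDaemon {
  Name = dirhost-fd
  FDPort = 9102
  WorkingDirectory = /var/spool/bacula
  Pid Directory = /var/run
}
Director {
  Name = bacula-dir
  Password = "fd-password"
}
```

A matching Client resource would then be added to bacula-dir.conf, and in bconsole's restore dialog that client is chosen as the restore target.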
On Mon, May 28, 2012 at 1:08 PM, Bryan Harris bryanlhar...@gmail.com wrote:
I don't think you can do this. You have to restore to the FD, not the
Director. If you plug the USB disk into the FD, then mount the disk there.
Then I think you'll get what you want.
Or else, set up and run
How about deleted files held open by other running processes? I wonder if an
open file that was deleted is using up some of your disk space?
lsof | grep -i deleted
Bryan
On May 28, 2012, at 12:18 PM, John Drescher wrote:
On Mon, May 28, 2012 at 1:08 PM, Bryan Harris bryanlhar...@gmail.com
-- Forwarded message --
From: John Drescher dresche...@gmail.com
Date: Mon, May 28, 2012 at 1:20 PM
Subject: Re: [Bacula-users] SD Bug User specified spool size reached
To: Sean Cardus scar...@zebrahosts.net
On Mon, May 28, 2012 at 9:38 AM, Sean Cardus scar...@zebrahosts.net
On 28.05.2012 17:44, Alan Brown wrote:
On 28/05/12 14:38, Sean Cardus wrote:
Scanning the log it looks like the storage daemon decreases its
spool size limit every time a backup job crashes. The last time
it used the full 50 GB was here:
I've been seeing exactly the same behaviour here
On 28.05.2012 19:21, John Drescher wrote:
On Mon, May 28, 2012 at 9:38 AM, Sean Cardus scar...@zebrahosts.net wrote:
On a Bacula installation with a CentOS 6.2 (64 bit) server
running Bacula 5.2.6 director and SD with an HP LTO3 autoloader
and backing up some twenty-odd machines of various
I have seen that in the past maybe 3 times in 10 years (around 30
thousand completed jobs), but in no way is that normal. I would
look for an updated version of Bacula; 5.2.6 is the current version.
I'm building 5.2.6 by hand now, as 5.0.0 was the latest in the Scientific
Linux repository.
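For anyone checking the limit itself: the spool cap is set per Device in the SD's bacula-sd.conf, via the Maximum Spool Size directive; a sketch with assumed values matching the 50 GB mentioned earlier in the thread:

```
# bacula-sd.conf -- Device resource (names, paths, and sizes assumed)
Device {
  Name = LTO3-Drive
  Media Type = LTO-3
  Archive Device = /dev/nst0
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 50G   # the "user specified spool size" from the log
}
```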