Hi all,
I have a problem with VirtualFull backups. I use 3 pools: OffsitePool,
DataSpool and WeeklyPoolTape.
The strategy is an incremental to WeeklyPoolTape every day, a VirtualFull to
DataSpool on Saturday night, after that a copy from DataSpool to
OffsitePool, and finally a migration back to WeeklyPoolTape.
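A minimal sketch of the bacula-dir.conf pieces such a rotation might use,
assuming the pool names above (the Schedule name and run times are
hypothetical, and the Copy/Migrate jobs are left out):

    Schedule {
      Name = "WeeklyCycle"                      # hypothetical name
      Run = Level=Incremental sun-fri at 23:05
      Run = Level=VirtualFull sat at 23:05
    }

    Pool {
      Name = WeeklyPoolTape
      Pool Type = Backup
      Next Pool = DataSpool   # a VirtualFull is written to the Next Pool
    }

The VirtualFull consolidates the existing WeeklyPoolTape jobs into DataSpool
through the Next Pool directive; the copy to OffsitePool and the migration
back would be separate Copy and Migrate jobs.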
On a clean machine:
installed Debian desktop 5.0.4 via netinstall
edited network ... fixed ip ... added hosts to /etc/hosts
used Synaptic Package Manager to install smbfs and smbclient
verified local network shares are accessible
defined 2 network printers and one shared printer in CUPS
verified
On 18/07/10 13:01, Stan wrote:
backupServer:/home/stan# bconsole
Connecting to Director localhost:9101
.
. [time passes]
.
18-Jul 06:38 bconsole JobId 0: Fatal error: bsock.c:129 Unable to
connect to Director daemon on localhost:9101. ERR=Connection refused
backupServer:/home/stan#
Did
In my experience, connection refused is unlikely to be a firewall problem. More
likely you are connecting to a port where no one is listening (e.g. the daemon
listening on, or bconsole connecting to, the wrong address).
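Two quick checks, assuming the default Debian config path (adjust to your
install):

    netstat -lnt | grep 9101     # is anything listening on port 9101 at all?
    grep -E 'DIRport|address' /etc/bacula/bconsole.conf   # where is bconsole pointing?

If the first shows nothing, the Director isn't running (or is bound
elsewhere); if the second shows a different host or port, bconsole is looking
in the wrong place.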
On Jul 18, 2010 12:59, Mister IT Guru <misteritg...@gmx.com> wrote:
This can be a solution, but I think this is a bug or an error in the
documentation.
In my case it is not the best solution: I'm not the administrator of the
server, and in the future someone could add another partition mounted under the
main filesystem. The onefs (potentially
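For reference, this is the option in question; a minimal FileSet sketch with
a hypothetical name, where onefs = no makes the FD cross mount points instead
of staying on one filesystem:

    FileSet {
      Name = "EverythingUnderRoot"   # hypothetical name
      Include {
        Options {
          signature = MD5
          onefs = no     # descend into filesystems mounted below File paths
        }
        File = /
      }
    }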
I have a fair number of failed jobs in my database, and I would like to
know whether it is possible to purge all the bad backups, so that all the
disk space they have taken up can be released.
This disk space is being replicated off site across the Internet, and
the number of GBs that are from
Some of my hosts have very large disks, and a complete full backup will
take over a day (close to 500 GB) over the Internet.
To work around this, I have split these hosts into smaller FileSets, and
I back up the more important data daily and the fixed data less often, as
sketched below. The thing is, the jobs
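A rough sketch of that kind of split, with hypothetical FileSet names and
paths:

    FileSet {
      Name = "bighost-important"    # run daily
      Include {
        Options { signature = MD5 }
        File = /home                # hypothetical path
      }
    }
    FileSet {
      Name = "bighost-static"       # run weekly or monthly
      Include {
        Options { signature = MD5 }
        File = /srv/archive         # hypothetical path
      }
    }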
I have a fair number of failed jobs in my database, and I would like to
know whether it is possible to purge all the bad backups, so that all the
disk space they have taken up can be released.
From bconsole: `delete jobid=#`.
If you have many, an SQL query to look for JobIds with failed status, piped
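A sketch of such a query against the standard Bacula catalog schema, where
JobStatus 'f' means fatal error, 'E' terminated in error, and 'A' canceled:

    SELECT JobId
      FROM Job
     WHERE JobStatus IN ('f', 'E', 'A');

The resulting JobIds can then be fed back to `delete jobid=...` in bconsole.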
Hello (Shane?),
Do you have any problem if we change the Bacula project license from GPLv2 +
modifications (OpenSSL exception + Microsoft source restriction exception) to
AGPL + modifications?
The main reason for this is so that if Bacula is used in the cloud, the users
(and thus hopefully
Just to add some info: Bacula version 5.0.2 on Ubuntu Lucid 64-bit.
*list media
Pool: Scratch
No results to list.
Pool: OffsitePool
[list media table for OffsitePool truncated here; the first column is MediaId]
I have a weekly backup set which has a retention time of 13 days.
I have just been asked to keep a set that was written 12 days ago.
Can someone please tell me how I can extend the retention time for this job?
Regards,
Richard
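One possible approach, assuming the volumes holding that set can first be
identified with `list jobmedia jobid=#` (the volume name below is
hypothetical): raise the volume retention from bconsole before pruning runs,

    *update volume=Weekly-0042 VolRetention=30days

This only helps while the catalog records still exist, so it has to be done
before the volume is pruned or recycled.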
Late last week I had my first run at Bacula on a Linux server with
self-contained storage and an LTO-4 16-slot tape library (a Dell PV-124T)
connected by a SAS controller. I ran 2.4.4-1 as it was the default under Debian
stable. After a few dry runs and problems I succeeded in backing up just over
On 18/07/10, Dan Langille (d...@langille.org) wrote:
used Synaptic Package Manager to install Bacula 2.4.4-1
I'd go for 5.x if you can. Much improved. But not related to the
problems you're having.
To do this on Debian Stable, please read the instructions here:
See