Hi.
Has anyone figured out how to continue a failed job? I have a client that has
gigabytes of data but a very fragile internet connection, so it's almost
impossible to complete a normal full backup job.
I thought I'd do it with a VirtualFull job to create a successful job out of a
failed one and then
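For context, the VirtualFull approach mentioned above is usually set up roughly like this. This is an illustrative sketch, not a tested config: all names are assumptions, and if I recall correctly VirtualFull needs a Next Pool to write the consolidated volume to.

```conf
# bacula-dir.conf (illustrative)
Pool {
  Name = DiskPool
  Pool Type = Backup
  Next Pool = ConsolidatedPool   # VirtualFull writes its output here
}

Job {
  Name = FragileClientBackup
  Type = Backup
  Level = Incremental            # small transfers survive a flaky link
  Client = fragile-fd
  FileSet = FullSet
  Pool = DiskPool
}
```

Once enough incrementals have accumulated, a consolidated full can be built server-side from bconsole with `run job=FragileClientBackup level=VirtualFull`, without using the client's connection at all.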
Bacula is not intended to delete anything. The way you create a new volume
for each new day with a new unique date-based name is not a good way to use
Bacula. You should give the volumes generic names and recycle them
instead of removing them (you do not destroy the tapes after one use
either).
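In config terms, the recycling approach described above looks roughly like this. An illustrative bacula-dir.conf sketch; the pool name, retention, and volume count are assumptions:

```conf
Pool {
  Name = DailyPool
  Pool Type = Backup
  Label Format = "Daily-"     # generic auto-generated names instead of dates
  Recycle = yes               # purged volumes get reused
  AutoPrune = yes             # prune expired jobs when a job finishes
  Volume Retention = 14 days  # after this, a volume may be recycled
  Maximum Volumes = 20        # a fixed rotation, like a set of tapes
}
```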
libssl-dev ? (Debian)
J.
2010/1/6 brown wrap gra...@yahoo.com
I cut and pasted four errors I received while trying to build Bacula; any
ideas?:
crypto.c:1226: error: cannot convert ‘unsigned char*’ to ‘EVP_PKEY_CTX*’
for argument ‘1’ to ‘int EVP_PKEY_decrypt(EVP_PKEY_CTX*, unsigned char*,
Hi,
I did some experiments with the Bacula database and found that there are no
foreign key constraints between JobMedia and Job, JobMedia and Media, Job
and File, and many similarly logically connected tables. Why? Constraints
prevent dangling references (a File with an empty Job, a File with an empty
Marek Simon wrote:
Phil Stracchino wrote:
I have a disk-based backup setup that uses dated volumes which are used
for a 23-hour period then marked 'used', so that I can be certain a
particular day's backups are contained in a single file. They are of
course purged after their retention
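A pool along these lines can be expressed with Volume Use Duration. An illustrative sketch; the names and retention values are assumptions:

```conf
Pool {
  Name = DiskPool
  Pool Type = Backup
  Volume Use Duration = 23h   # marked Used 23 hours after first write
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 30 days  # purged (and recyclable) after this
}
```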
On Tue, Jan 05, 2010 at 04:55:51PM -0500, John Drescher wrote:
I'm not seeing anywhere close to 60M/s ( 30 ). I think I just fixed
that. I increased the block size to 1M, and that seemed to really
increase the throughput, in the test I just did. I will see tomorrow,
when it all runs.
Hello !
On 1/5/10 6:40 PM, Phil Stracchino wrote:
Brian Debelius wrote:
I want to see if having disk storage on one sd process, and tape storage
on another sd process, would increase throughput during copy jobs.
Actually, it'll decrease it rather drastically. All the way to none.
Good
Marek Simon wrote:
Bacula is not intended to delete anything. The way you create a new volume
for each new day with a new unique date-based name is not a good way to use
Bacula. You should give the volumes generic names and recycle them
instead of removing them (you do not destroy the tapes after
Tino Schwarze wrote:
On Tue, Jan 05, 2010 at 04:55:51PM -0500, John Drescher wrote:
I'm not seeing anywhere close to 60M/s ( 30 ). I think I just fixed
that. I increased the block size to 1M, and that seemed to really
increase the throughput, in the test I just did. I will see
Daniel Holtkamp wrote:
Hello !
On 1/5/10 6:40 PM, Phil Stracchino wrote:
Brian Debelius wrote:
I want to see if having disk storage on one sd process, and tape storage
on another sd process, would increase throughput during copy jobs.
Actually, it'll decrease it rather drastically. All
Hi there,
On Tue, Jan 05, 2010 at 07:30:44PM +0100, Tino Schwarze wrote:
It looks like btape is not happy.
Error reading block: ERR=block.c:1008 Read zero bytes at 326:0 on device
Superloader-Drive (/dev/nst0).
Are your tapes old (still good)? Did you clean the drive? Latest
On Sun, Jan 3, 2010 at 4:08 PM, Kern Sibbald k...@sibbald.com wrote:
Hello,
The other day, Eric pointed out to me that some of the manuals that we have
posted in different languages are quite out of date -- in fact, the partial
French translation apparently dates from version 1.38.
Hello
On Wed, Jan 06, 2010 at 02:20:06PM +0100, Tino Schwarze wrote:
It looks like btape is not happy.
Error reading block: ERR=block.c:1008 Read zero bytes at 326:0 on
device Superloader-Drive (/dev/nst0).
Are your tapes old (still good)? Did you clean the drive? Latest
I believe they are rated for 250 complete passes.
On 1/6/2010 9:05 AM, Tino Schwarze wrote:
All the same config, just an older tape. :-( I'm not sure how often
it's been used because of our rather complicated multi-stage backup
scheme (daily/weekly/monthly/yearly pools). Is it possible that
That should not matter. I have read somewhere that Kern said that very
large block sizes can waste space; I do not know how this works, though.
On 1/6/2010 8:20 AM, Tino Schwarze wrote:
Is it possible that the low block size of 64k affects tape capacity? It
looks suspicious to me that all
Put a new tape in and run tapeinfo -f /dev/nst0; it should report
what block size range your drive can support, along with lots of other
useful information.
Is it possible that the low block size of 64k affects tape capacity? It
looks suspicious to me that all tapes end at about the same size...
I have never seen it go below native capacity. I usually get around
1.5 to 1 compression on my data, with outliers being close to
1.1 to 1 and 5.5
System: Debian Squeeze (testing),
Kernel version 2.6.26 (modified)
Bacula V3.0.2. and MySQL v5.1.41
curlftpfs 0.9.2-1
curl 7.19.7-1
At the moment I am testing some configurations before going to the live
system.
I want to store the backup files on the FTP server using curlftpfs,
but I am not
One of the changes in the pipeline seems to be making the SD
multi-threaded. Thus, if you use a single SD, it will be able to use
multiple CPUs going forward. I don't know what the ETA is, but I look
forward to that change.
-Jason
On Tue, 2010-01-05 at 12:56
Is btape limited to a ~1 MB maximum block size? 999424 works. 1000448
fails.
v3.0.3
No, it's Fedora 12.
--- On Wed, 1/6/10, francisco javier funes nieto esen...@gmail.com wrote:
From: francisco javier funes nieto esen...@gmail.com
Subject: Re: [Bacula-users] Trying to compile Bacula
To: brown wrap gra...@yahoo.com
Cc: bacula-users@lists.sourceforge.net
Date: Wednesday, January
I tried to do that years ago, but I believe it made all the tapes that
were already written unreadable (and I now have 80), so I gave it up.
With my 5+ year old dual-processor Opteron 248 server I get
25 MB/s to 45 MB/s despools (which measures the actual tape rate) for
my LTO2 drives.
On Wed, 06 Jan 2010 11:08:17 +0200, Silver Salonen wrote:
Hi.
Has anyone figured out how to continue a failed job? I have a client
that has gigabytes of data but a very fragile internet connection, so
it's almost impossible to complete a normal full backup job.
I thought I'd do it with
Thomas Mueller tho...@chaschperli.ch wrote in message
news:hi2ffj$ga...@ger.gmane.org...
On Wed, 06 Jan 2010 11:08:17 +0200, Silver Salonen wrote:
Hi.
Has anyone figured out how to continue a failed job? I have a client
that has gigabytes of data but a very fragile internet connection,
Thomas Mueller wrote:
On Wed, 06 Jan 2010 11:08:17 +0200, Silver Salonen wrote:
Hi.
Has anyone figured out how to continue a failed job? I have a client
that has gigabytes of data but a very fragile internet connection, so
it's almost impossible to complete a normal full backup job.
I thought
Thomas Mueller wrote:
I tried to do that years ago, but I believe it made all the tapes that
were already written unreadable (and I now have 80), so I gave it up.
With my 5+ year old dual-processor Opteron 248 server I get
25 MB/s to 45 MB/s despools (which measures the actual tape
I believe that Bacula no longer works with curlftpfs (version 0.9.2).
I have found several reports which describe a similar behavior.
https://bugs.launchpad.net/ubuntu/+source/curlftpfs/+bug/367091
http://sourceforge.net/projects/curlftpfs/forums/forum/542750/topic/3295831
Finally I found
I have an MSL2024 library I have been battling to get running for the
last couple of days and am about out of ideas. It is the parallel SCSI
attached LTO-4 version, connected to an HP LSI based HBA.
I have bacula configured and it passes the mtx-changer test commands
recommended in the manual
On Wed, Jan 6, 2010 at 5:16 PM, Richard Scobie rich...@sauce.co.nz wrote:
I have an MSL2024 library I have been battling to get running for the
last couple of days and am about out of ideas. It is the parallel SCSI
attached LTO-4 version, connected to an HP LSI based HBA.
I have bacula
You probably need to add a wait in the mtx-changer script just after
the load. This wait will make sure the tape is in the drive and has
completed the loading process. For some systems the script does not
wait long enough for the tape drive to finish.
John
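Bacula's stock mtx-changer handles this with a polling loop rather than one fixed sleep. A minimal sketch of the idea, assuming `mt` reports an ONLINE flag once the tape is ready; the device path and timeout are illustrative:

```shell
# Poll until the drive reports ONLINE after a "load", up to a timeout,
# instead of hoping a single fixed sleep is long enough.
wait_for_drive() {
  device="$1"
  timeout="${2:-300}"    # give up after this many seconds
  waited=0
  while [ "$waited" -lt "$timeout" ]; do
    # mt prints an ONLINE flag once the tape is mounted and ready
    if mt -f "$device" status 2>/dev/null | grep -q ONLINE; then
      return 0
    fi
    sleep 1
    waited=$((waited + 1))
  done
  return 1               # drive never came ready
}
```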
Thanks John. I already have a 30
Hi,
On Tue, Jan 5, 2010 at 12:36 PM, Javier Barroso javibarr...@gmail.com wrote:
On Tue, Jan 5, 2010 at 12:18 PM, John Drescher dresche...@gmail.com wrote:
On Tue, Jan 5, 2010 at 4:26 AM, Javier Barroso javibarr...@gmail.com wrote:
Hi people,
First, I'm using an old bacula version (etch
Hi again,
On Thu, Jan 7, 2010 at 12:27 AM, Javier Barroso javibarr...@gmail.com wrote:
Hi,
On Tue, Jan 5, 2010 at 12:36 PM, Javier Barroso javibarr...@gmail.com wrote:
On Tue, Jan 5, 2010 at 12:18 PM, John Drescher dresche...@gmail.com wrote:
On Tue, Jan 5, 2010 at 4:26 AM, Javier Barroso
On 01/06/2010 05:40 PM, brown wrap wrote:
I tried compiling it and received errors, which I posted, but didn't
really get an answer. I then started to look for RPMs. I found the
client RPM, but not the server RPM, unless I don't know what I'm looking
for. Can someone point me to the RPMs I
Richard Scobie wrote:
You probably need to add a wait in the mtx-changer script just after
the load. This wait will make sure the tape is in the drive and has
completed the loading process. For some systems the script does not
wait long enough for the tape drive to finish.
John
Thanks
Hello,
I'm having difficulty compiling Bacula 3.0.3 on Solaris 10 x86. The build
fails at the same spot whether I'm using gcc or Sun Studio's compiler. I'm
linking against the 32-bit binary build of MySQL5 from OpenCSW. My build
environment is:
LDFLAGS=-L/opt/sunstudio12.1/lib
With
Maximum File Size = 5G
Maximum Block Size = 262144
Maximum Network Buffer Size = 262144
I get up to 150 MB/s while despooling to LTO-4 drives. Maximum File
Size gave me some extra MB/s; I think it's as important as
Maximum Block Size.
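For anyone reproducing this, those directives live in different resources. If I recall the layout correctly, the placement in bacula-sd.conf looks roughly like this (names are illustrative, and Maximum Network Buffer Size should be matched on the FD side):

```conf
Storage {
  Name = my-sd
  Maximum Network Buffer Size = 262144   # match this in the FD config
}

Device {
  Name = LTO4-Drive
  Media Type = LTO-4
  Archive Device = /dev/nst0
  Maximum File Size = 5G        # fewer filemarks, faster positioning
  Maximum Block Size = 262144   # larger tape blocks per write
}
```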
thanks for providing this
On Wednesday 06 January 2010 19:30:42 Timo Neuvonen wrote:
Thomas Mueller tho...@chaschperli.ch wrote in message
news:hi2ffj$ga...@ger.gmane.org...
On Wed, 06 Jan 2010 11:08:17 +0200, Silver Salonen wrote:
Hi.
Has anyone figured out how to continue a failed job? I have a client