Rory Campbell-Lange wrote:
> I've enabled hardware compression and throughput speed has improved to
> around 80MB/sec which is better but still not fantastic. It is coming
> off disk at around 90-100MB/sec so the difference must be partly down to
> the hw compression.
On my LTO4 I see variations in
Hi Richard
On 16/07/10, richard (rich...@sauce.co.nz) wrote:
> Rory Campbell-Lange wrote:
>
> > Currently I'm getting a throughput of around 10MB/sec to tape, which
> > seems pretty low. iotop is showing reading off disk at around
> > 17-20MB/sec and one of the 8 cores on this Xeon E5520 (2267MHz
Rory Campbell-Lange wrote:
> Currently I'm getting a throughput of around 10MB/sec to tape, which
> seems pretty low. iotop is showing reading off disk at around
> 17-20MB/sec and one of the 8 cores on this Xeon E5520 (2267MHz) is
> running at 100%. Hardware compression is disabled on the tape cha
Hi. I'm busy writing a test 3TB volume off fairly slow storage on a
local machine to a SAS connected DELL PV-124T tape changer with 1 drive
and (I think) 8 slots. The local storage should be able to sustain well
over 100MB/s reads.
I'm running my test backup with GZIP compression enabled.
Current
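For anyone comparing notes: software compression in Bacula is configured per FileSet, not per device. A minimal sketch (the resource name and path are placeholders, not taken from this thread):

FileSet {
  Name = "TestSet"
  Include {
    Options {
      compression = GZIP   # software gzip; runs in the file daemon (client) CPU
      signature = MD5
    }
    File = /data
  }
}

Software GZIP is effectively single-threaded per job, which is consistent with the one-core-at-100% symptom described above; the drive's hardware compression avoids that CPU cost entirely.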
Hi. I'm running Bacula 2.4.4 (28 December 2008) on Debian stable while
I'm testing Bacula.
It's unclear to me how I can load the bacula module from Python.
Many thanks
Rory
--
Rory Campbell-Lange
r...@campbell-lange.net
---
On 07/15/10 14:07, May, John wrote:
> I am using Bacula 5.0.2 using MySQL on Centos 5.4 x64. I'm trying to
> restore some files from a backup I made a couple months ago. The restore
> is about 1.5 TB in size and spans two LTO4 tapes. I can start the restore
> just fine and the first 5GB fl
I looked into the duplicate jobs options and according to the
documentation the default is to not allow duplicate jobs. So why am I
seeing duplicate jobs queued? In this case I have a copy to tape job:
Job {
Name = "CopyToTape"
Type = Copy
#Schedule = "WeeklyCycleAfterBackup"
Priori
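For comparison, the duplicate-job directives in the 5.x Job resource can be set explicitly rather than relying on the defaults (which changed between releases); a sketch with placeholder values:

Job {
  Name = "CopyToTape"
  Type = Copy
  Allow Duplicate Jobs = no        # refuse a second instance of the same job
  Cancel Queued Duplicates = yes   # drop the newer duplicate if one is queued anyway
}

If duplicates still appear queued with these set, it is worth checking which release of the Director is actually running, since the 5.0.x behavior of these directives was revised more than once.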
On 07/06/2010 05:15 PM, Jon Schewe wrote:
> Anyone seen this error before?
> Error: xattr.c:707 extattr_list_link error on file "/boot":
> ERR=Undefined error: 0
>
> I did a quick search on Google and only got foreign sites. The client is
> a NetBSD machine and the server is Linux.
>
>
Anyone el
> Has your tape drive drive passed all the btape tests?
Yes, it did. I ran the tests again, just to make sure, and it is all ok.
FWIW, this is an Arcvault48 Autochanger library with 2 HP Ultrium LTO4 drives.
John May
Fugro Horizons, Inc.
john@fugrohorizons.com
-Original Message-
On 7/15/2010 3:05 PM, May, John wrote:
>> Are you restoring a few files from a backup? An entire backup?
>
> I am restoring the entire backup, the full two tapes worth of data.
If you were selecting a few files, here and there, I would look at your
skip forward feature on your tape drive and ens
Rory Campbell-Lange wrote:
> I'd be grateful to know if the Bacula catalogue records allow Bacula to
> more rapidly restore data from large tape sets by understanding which
> tape it is stored upon.
Absolutely. We are doing 12+ TB backups to LTO4 and restoration of a
single file is extremely eff
>Are you restoring a few files from a backup? An entire backup?
I am restoring the entire backup, the full two tapes worth of data.
>What command did you issue?
# bextract -c /etc/bacula/bacula-sd.conf -V 11L4\|21L4 /dev/nst0 /restores
John
-Original Message-
From: Dan Langil
Yes!! No doubt!!!
Thanks again, my friend!!!
Best regards from Uruguay!!
--Original Message--
From: Josh Fisher
To: Sebastián Moreno
CC: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula | NAS200 - Linksys
Sent: 15 Jul, 2010 16:03
On 7/15/2010 2:56 PM, Sebastián
On 7/15/2010 2:07 PM, May, John wrote:
> I am using Bacula 5.0.2 using MySQL on Centos 5.4 x64. I’m trying to
> restore some files from a backup I made a couple months ago. The
> restore is about 1.5 TB in size and spans two LTO4 tapes. I can start
> the restore just fine and the first 5GB flies
On 7/15/2010 2:56 PM, Sebastián Moreno wrote:
> Josh:
>
>
> Thanks a lot!! Your suggestion works perfectly! I mounted the share
> folder with this line:
>
> mount -t cifs //192.168.150.142/backup -o
> username=bacula,password=some_pass,rw,bg,vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,fo
is my line to mount the folder:
>>>>>>
>>>>>> mount -t cifs //192.168.150.142/backup -o
>>>>>> username=bacula,password=some_pass,rw,bg,vers=3,proto=tcp,hard,intr,rsize=
>>>>>> 32768,wsize=32768,forcedirectio,llock /backup/N
I am using Bacula 5.0.2 using MySQL on Centos 5.4 x64. I'm trying to restore
some files from a backup I made a couple months ago. The restore is about
1.5 TB in size and spans two LTO4 tapes. I can start the restore just fine and
the first 5GB flies by in a minute or so, then the restore s
to mount the folder:
>>>>>>
>>>>>> mount -t cifs //192.168.150.142/backup -o
>>>>>> username=bacula,password=some_pass,rw,bg,vers=3,proto=tcp,hard,intr,rsize=
>>>>>> 32768,wsize=32768,forcedirectio,llock /backup/NAS/
some_pass,rw,bg,vers=3,proto=tcp,hard,intr,rsize=
>>>>> 32768,wsize=32768,forcedirectio,llock /backup/NAS/
>>>>>
>>>>> On my bacula box, when I mount this share folder I see the users
>>>>> "501","502". This u
On Thu, 15 Jul 2010 15:46:38 +0100
Rory Campbell-Lange wrote:
> We are presently trying to use Amanda to write 7-10TB of data from one
> server to an LTO4 tape library. We are frustrated with problems in
> retrieving files from the second tape onwards. Amanda seems to suffer
> from an intrinsic p
o every file on the NAS.
>>
>
> I assume you are doing the cifs mount as root?
>
> John
>
> __ ESET Smart Security information, virus signature database
> version 5281 (20100715) __
>
> ESET Smart Security has checked this message.
>
> http://www.eset.com
> This account is from the NAS... I created the user "bacula" on the NAS. This
> user has access to every file on the NAS.
>
I assume you are doing the cifs mount as root?
John
--
> /backup/NAS/
>>
>
> You need to change the cifs authentication to an account that has
> access to every file.
>
> John
>
orcedirectio,llock /backup/NAS/
>>>>
>>>> On my bacula box, when I mount this share folder I see the users
>>>> "501","502". These users are from the NAS
>>>>
>>>>
>>>> If you need more info please don't
Hi,
Sorry to warm up this slightly old discussion, but since I'm suffering
from a similar problem, I just stumbled upon this thread while
searching for a solution.
In my setting, it's a disk-to-disk-to-tape setup, I'm writing the data
from my clients to a RAID-6 storage (15 x 1.5TB storage system
>>>
>>> --
>>>
>>>
>>>
>>>Sebastián Moreno
>>>Projects Manager
>>>
>>>IT BUSINESS Corp.
>>>Av. 18 de Julio 1805 5° Piso
>>>( +598-2-4093287 | Mobile: +598-9
We are presently trying to use Amanda to write 7-10TB of data from one
server to an LTO4 tape library. We are frustrated with problems in
retrieving files from the second tape onwards. Amanda seems to suffer
from an intrinsic problem about knowing where to find a file from the
backup set, which is
On 7/15/2010 5:52 AM, manga...@hotmail.com wrote:
> Greetings,
>
> I'm using bacula to complete a backup of a Linux client with different partitions.
> In the log of my backup I get this message:
>
> mailserver3-fd JobId 3516: /var is a different filesystem. Will not
> descend
> from / into /var
>
Hello,
I'm running some backups and I'm not quite sure what to make of the
reports:
10-Jul 23:26 dio-sd JobId 438: Alert: smartctl 5.39.1 2010-01-28 r3054
[x86_64-redhat-linux-gnu] (local build)
10-Jul 23:26 dio-sd JobId 438: Alert: Copyright (C) 2002-10 by Bruce
Allen,http://smartmontools
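Those smartctl lines are normal: they are produced by the storage daemon's Alert Command, which Bacula runs against the drive after a job and reports as Alert messages. A sketch of a Device resource that would generate them (the device name and path here are placeholders):

Device {
  Name = "LTO-Drive"
  Archive Device = /dev/nst0
  Alert Command = "sh -c 'smartctl -H -l error %c'"   # %c expands to the control device
}

So unless the smartctl output itself reports failing health or error-log entries, these Alerts are informational rather than a sign of trouble.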
daveb93 wrote:
> My bacula server is multi-homed.
> I need port 9102 to answer on all of the IP addresses on the server in
> order to service all of the subnets attached to the machine
>
> I looked at taking care of this on the network routing level but it just
> would not be practical
>
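In case it helps: the client's listening address is set in the FileDaemon resource of bacula-fd.conf, and binding the wildcard address makes it answer on every interface of a multi-homed host. A sketch with a placeholder daemon name:

FileDaemon {
  Name = bacula-fd
  FDport = 9102
  FDAddress = 0.0.0.0   # bind all IPv4 addresses on the machine
}

If an existing FDAddress line pins the daemon to a single interface, replacing it with the wildcard (or removing it and restarting bacula-fd) is usually enough.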
Hello!
When I use bat to create a job manually, I don't get the bootstrap
information. I have set that in the conf.
Am I missing something here, or how can I check whether it's just an
error in bat?
Greetings
Alex
--
On Tue, Jul 13, 2010 at 08:53:55AM +0700, Ken wrote:
> Hello Alex,
>
> If you read make_catalog_backup.pl the accepted parameters are $1 dbname,
> and $2 dbuser, so I would suggest you try
>
> RunBeforeJob = "/usr/lib64/bacula/make_catalog_backup.pl bacula bacula"
>
> Hope that helps.
> Ken
>
I
I've been using Bacula on a Windows 2008 server for a while now. A couple
things:
- Forget tape drives. With SBS Server, use USB hard disks for backup. Use
2008's native backup (Not NTBackup any more - the new one is MUCH improved once
you get over the completely new concepts).
- Forget BackupE
Greetings,
I'm using bacula to complete a backup of a Linux client with different partitions.
In the log of my backup I get this message:
mailserver3-fd JobId 3516: /var is a different filesystem. Will not
descend
from / into /var
googling for help, I found two solutions. Specify /var as a fo
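For reference, both fixes are made at the FileSet level; the one-line variant is the onefs option. A sketch (the FileSet name is a placeholder):

FileSet {
  Name = "LinuxAll"
  Include {
    Options {
      onefs = no    # descend across mount points instead of stopping at /var
      signature = MD5
    }
    File = /
  }
}

The alternative is to keep the default (onefs = yes) and add an explicit File = /var line to the Include. Note that onefs = no will also descend into network and removable mounts unless those are excluded.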
I have a problem with one of my pools.
Jobs are waiting for a volume to be labeled, but I cannot label it.
The other pools are running well, and they are on the same disk (a
different device, of course).
Everything was running fine for about 4 years, and on version 5.0.1 too
for some months; just from some moment it is waitin
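One thing worth ruling out when jobs sit waiting and a volume cannot be labeled into a particular pool is a cap in that Pool resource. A sketch with placeholder values, not the poster's actual configuration:

Pool {
  Name = "ProblemPool"
  Pool Type = Backup
  Maximum Volumes = 100   # once this is reached, no new volume can be labeled into the pool
  Label Format = "Vol-"   # lets Bacula auto-label new file volumes
}

If Maximum Volumes has been reached, either raising it, or pruning/purging old volumes so one becomes recyclable, typically lets labeling proceed again.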