Re: [Bacula-users] Slow Performance

2012-05-28 Thread Graham Worley
> I have seen that maybe 3 times in 10 years (around 30
> thousand completed jobs), but it is in no way normal.  I would
> look for an updated version of Bacula; 5.2.6 is the current version
I'm building 5.2.6 by hand now, as 5.0.0 was the latest in the Scientific 
Linux repository. I'll see how I get on.
> Turn off software compression or run multiple simultaneous jobs from
> different clients.
I believe compression is off. Should running multiple jobs help? I'm 
trying to understand the cause of the bottleneck. Other than trial and 
error with simultaneous jobs, is there a way of determining whether the 
problem lies at the file daemon end or at the storage daemon end?
>
> John
Thanks for your helpful suggestions,


Graham


-- 
Graham Worley BEng MIET
Network Manager
School of Ocean Sciences
College of Natural Sciences
University of Wales BANGOR
Askew Street
Menai Bridge
Anglesey
LL59 5EY

e-mail : g.wor...@bangor.ac.uk
Tel.   : 01248 382694


-- 
Rhif Elusen Gofrestredig / Registered Charity No. 1141565


This email and any attachments may contain confidential material and
is solely for the use of the intended recipient(s).  If you have
received this email in error, please notify the sender immediately
and delete this email.  If you are not the intended recipient(s), you
must not use, retain or disclose any information contained in this
email.  Any views or opinions are solely those of the sender and do
not necessarily represent those of Bangor University.
Bangor University does not guarantee that this email or
any attachments are free from viruses or 100% secure.  Unless
expressly stated in the body of the text of the email, this email is
not intended to form a binding contract - a list of authorised
signatories is available from the Bangor University Finance
Office.  www.bangor.ac.uk

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Fwd: SD Bug "User specified spool size reached"

2012-05-28 Thread Tilman Schmidt
On 28.05.2012 19:21, John Drescher wrote:
> On Mon, May 28, 2012 at 9:38 AM, Sean Cardus  wrote:

>>> On a Bacula installation with a CentOS 6.2 (64 bit) server
>>> running Bacula 5.2.6 director and SD with an HP LTO3 autoloader
>>> and backing up some twenty-odd machines of various platforms,
>>> all the jobs report "User specified spool size reached"
>>> much too early, currently after 1.45 GB instead of the actually
>>> specified size of 50 GB. Example log excerpt:
>> [..]
>>> Scanning the log it looks like the storage daemon decreases its
>>> spool size limit every time a backup job crashes. The last time
>>> it used the full 50 GB was here:
>>
>> I've been seeing exactly the same behaviour here (CentOS 4.9, Bacula SD 
>> 5.0.3) for quite some time.  However, until I saw your email I didn't 
>> correlate the max spool size reducing with each time a job crashes.  I can 
>> also confirm that whenever a job crashes, the max spool size drops.
>>
>> The only workaround I can see at the moment is restarting the SD...
> 
> I would be interested in seeing why jobs are crashing and possibly
> submit bug reports for that. I have run 30 thousand or so jobs at work
> and I do not see very many job crashes.

That one is easy, and not very interesting actually. The two most
frequent causes are:
a) network outages (we do some backups over WAN VPN connections)
b) clients crashing during backup, either coincidentally or because the
additional load from the backup pushes an already flaky machine over
the brink.
Neither is Bacula's fault, so neither is something that would warrant a
Bacula bug report, I think.
(Although I do sometimes wish Bacula would handle clients disappearing
in the middle of a backup more gracefully.)

To sum it up: there can be legitimate reasons for a job not terminating
successfully, but that shouldn't lead to a diminishing maximum spool
size IMHO.
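For context, the limit being eroded is the one configured on the SD side; a sketch of where it lives, with hypothetical resource names (check the Bacula manual for the exact directives your version supports):

```conf
# bacula-sd.conf -- Device resource (sketch; unrelated directives omitted)
Device {
  Name = LTO3-Drive
  Media Type = LTO-3
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 50G   # total spool the SD should allow -- the
                             # value that appears to shrink after crashes
}
```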

T.






Re: [Bacula-users] SD Bug "User specified spool size reached"

2012-05-28 Thread Tilman Schmidt
On 28.05.2012 17:44, Alan Brown wrote:
> On 28/05/12 14:38, Sean Cardus wrote:
> 
>>> Scanning the log it looks like the storage daemon decreases its
>>> spool size limit every time a backup job crashes. The last time
>>> it used the full 50 GB was here:
>>
>> I've been seeing exactly the same behaviour here (CentOS 4.9, Bacula SD 
>> 5.0.3) for quite some time.  However, until I saw your email I didn't 
>> correlate the max spool size reducing with each time a job crashes.  I can 
>> also confirm that whenever a job crashes, the max spool size drops.
>>
>> The only workaround I can see at the moment is restarting the SD...
> 
> Have you looked in the spool area to find orphan files?

As I already wrote, the spool area contained only two zero-sized files.

T.



signature.asc
Description: OpenPGP digital signature


[Bacula-users] Fwd: SD Bug "User specified spool size reached"

2012-05-28 Thread John Drescher
-- Forwarded message --
From: John Drescher 
Date: Mon, May 28, 2012 at 1:20 PM
Subject: Re: [Bacula-users] SD Bug "User specified spool size reached"
To: Sean Cardus 


On Mon, May 28, 2012 at 9:38 AM, Sean Cardus  wrote:
> Hi,
>
>> On a Bacula installation with a CentOS 6.2 (64 bit) server
>> running Bacula 5.2.6 director and SD with an HP LTO3 autoloader
>> and backing up some twenty-odd machines of various platforms,
>> all the jobs report "User specified spool size reached"
>> much too early, currently after 1.45 GB instead of the actually
>> specified size of 50 GB. Example log excerpt:
> [..]
>> Scanning the log it looks like the storage daemon decreases its
>> spool size limit every time a backup job crashes. The last time
>> it used the full 50 GB was here:
>
> I've been seeing exactly the same behaviour here (CentOS 4.9, Bacula SD 
> 5.0.3) for quite some time.  However, until I saw your email I didn't 
> correlate the max spool size reducing with each time a job crashes.  I can 
> also confirm that whenever a job crashes, the max spool size drops.
>
> The only workaround I can see at the moment is restarting the SD...
>

I would be interested in seeing why jobs are crashing and possibly
submit bug reports for that. I have run 30 thousand or so jobs at work
and I do not see very many job crashes.

John


-- 
John M. Drescher



Re: [Bacula-users] ERR=No space left on device

2012-05-28 Thread Bryan Harris
How about deleted files held open by running processes?  A file that has 
been deleted but is still open can continue to consume disk space:

lsof | grep -i deleted
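A slightly fuller version of the same check, assuming GNU coreutils and lsof are available (the spool path is a guess):

```shell
# Space the kernel thinks is used vs. space the directory tree accounts
# for; a large gap often points at deleted-but-still-open files.
df -h /var/spool/bacula
du -sh /var/spool/bacula

# lsof +L1 lists open files whose link count is zero, i.e. files that
# were unlinked (deleted) but are still held open by some process.
lsof +L1
```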

Bryan

On May 28, 2012, at 12:18 PM, John Drescher wrote:

> On Mon, May 28, 2012 at 1:08 PM, Bryan Harris  wrote:
>> I don't think you can do this.  You have to restore to the FD not the 
>> director.  If you plug the USB disk into the FD, then mount the disk there.  
>> Then I think you'll get what you want.
>> 
>> Or else, setup and run another FD on the same server as the director.  Then 
>> restore to the new FD.
>> 
> 
> The restore location does not have to be on the same machine as the
> client you made a backup on. You can set the location when you run the
> restore from bconsole. I am not sure of what options any of the GUIs
> provide so I can not comment on these.
> 
> John




Re: [Bacula-users] ERR=No space left on device

2012-05-28 Thread John Drescher
On Mon, May 28, 2012 at 1:08 PM, Bryan Harris  wrote:
> I don't think you can do this.  You have to restore to the FD not the 
> director.  If you plug the USB disk into the FD, then mount the disk there.  
> Then I think you'll get what you want.
>
> Or else, setup and run another FD on the same server as the director.  Then 
> restore to the new FD.
>

The restore location does not have to be on the same machine as the
client you made the backup on. You can set the location when you run the
restore from bconsole. I am not sure what options any of the GUIs
provide, so I cannot comment on those.
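For anyone searching the archives: in bconsole the target can be changed at the run prompt. A sketch of a session (client and path names are hypothetical):

```text
* restore client=web01-fd            # client whose backup you are restoring
  ... select the files to restore ...
OK to run? (yes/mod/no): mod
  # pick "Restore Client" to send the data to a different FD,
  # and "Where" to set the directory the files are restored under
OK to run? (yes/mod/no): yes
```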

John



Re: [Bacula-users] ERR=No space left on device

2012-05-28 Thread Bryan Harris
I don't think you can do this.  You have to restore to the FD, not the 
director.  If you plug the USB disk into the FD and mount the disk there, 
then I think you'll get what you want.

Or else, set up and run another FD on the same server as the director, then 
restore to the new FD.

Bryan

On May 28, 2012, at 11:53 AM, Phil Stracchino wrote:

> On 05/28/2012 12:44 PM, Luis H. Forchesatto wrote:
>> Hi
>> 
>> I'm receiving this message when trying to restore a backup to a USB
>> disk. Space on the disk is not the problem, and 99% of the disk's inodes
>> are free. The disk contains an ext3 filesystem, and outside of bconsole
>> I can read and write to it with no problem.
>> 
>> I'm trying to restore a backup made by Bacula to the disk. The director
>> is on the machine where the USB disk is attached, and the file daemon is
>> on another server on the same network (the connection between them is
>> fine). The disk is 500 GB.
>> 
>> Any ideas?
> 
> 
> You have of course made sure the restore path is correct?
> 
> 
> 
> -- 
>  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
>  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
>  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
> It's not the years, it's the mileage.
> 




[Bacula-users] How to restore a tape using a different SD

2012-05-28 Thread Rushdhi Mohamed
Hi,

I am using Bacula to back up all the servers in my office. That's automated
and works fine.

I am using a NAS and an autoloader.  Nightly backups are written to the NAS
and then to tape (copy jobs).

Now the problem is, I want to restore the backups on a remote server to
check whether the backups are intact.

So I configured the remote server as another SD (in a branch office) and
connected it to my Bacula Director at the main office. When I try to
restore the backups I get this error:
28-May 22:09 dir-backup01.hnbassurance.com JobId 6114: Start Restore Job
RestoreFiles.2012-05-28_22.09.20_39

28-May 22:09 dir-backup01.hnbassurance.com JobId 6114: Using Device
"Remote-LTO-4"

28-May 22:09 fd-backup01.hnbassurance.com JobId 6114: Fatal error:
job.c:2031 Bad response to Read Data command. Wanted 3000 OK data, got 3000 error



28-May 22:09 dir-backup01.hnbassurance.com JobId 6114: Using Device
"NASStore001"

28-May 22:09 fd-backup01.hnbassurance.com JobId 6114: Fatal error: Failed
to authenticate Storage daemon.

28-May 22:09 dir-backup01.hnbassurance.com JobId 6114: Fatal error: Bad
response to Storage command: wanted 2000 OK storage, got 2902 Bad storage





28-May 22:09 dir-backup01.hnbassurance.com JobId 6114: Error: Bacula
dir-backup01.hnbassurance.com 5.0.3 (04Aug10): 28-May-2012 22:09:29

  Build OS:   x86_64-suse-linux-gnu suse 5.x

  JobId:  6114

  Job:RestoreFiles.2012-05-28_22.09.20_39

  Restore Client: fd-backup01.domain.com

  Start time: 28-May-2012 22:09:22

  End time:   28-May-2012 22:09:29

  Files Expected: 16

  Files Restored: 0

  Bytes Restored: 0

  Rate:   0.0 KB/s

  FD Errors:  0

  FD termination status:

  SD termination status:  Waiting on FD

  Termination:*** Restore Error ***



28-May 22:09 dir-backup01.mydomain.com JobId 6114: Error: Bacula
dir-backup01.hnbassurance.com 5.0.3 (04Aug10): 28-May-2012 22:09:29

  Build OS:   x86_64-suse-linux-gnu suse 5.x

  JobId:  6114

  Job:RestoreFiles.2012-05-28_22.09.20_39

  Restore Client: fd-backup01.mydomain.com

  Start time: 28-May-2012 22:09:22

  End time:   28-May-2012 22:09:29

  Files Expected: 16

  Files Restored: 0

  Bytes Restored: 0

  Rate:   0.0 KB/s

  FD Errors:  1

  FD termination status:

  SD termination status:  Waiting on FD

  Termination:*** Restore Error ***



In a nutshell: I copied the backups to tape using the loader connected to
SD1, and I want to restore them using another tape loader connected to SD2,
where both SD1 and SD2 are connected to the same Director. Is this possible?

I can do the restore using bls and bextract, but if this is possible I
would have a GUI to use, which would be fine.
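For the record, a second SD is normally wired in with its own Storage resource in bacula-dir.conf; the "Failed to authenticate Storage daemon" error above usually means the Password here does not match the Director entry in SD2's bacula-sd.conf. A sketch with hypothetical names:

```conf
# bacula-dir.conf (sketch)
Storage {
  Name = Remote-LTO-4
  Address = sd2.example.com   # must be reachable from the restore client (FD)
  SD Port = 9103
  Password = "sd2-password"   # must match the Director resource in
                              # SD2's bacula-sd.conf
  Device = LTO-4-Drive        # must match the Device Name in SD2's config
  Media Type = LTO-4          # must match the type the volumes were written with
}
```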


Hoping for a good solution. Thanks in advance.


Re: [Bacula-users] ERR=No space left on device

2012-05-28 Thread Phil Stracchino
On 05/28/2012 12:44 PM, Luis H. Forchesatto wrote:
> Hi
> 
> I'm receiving this message when trying to restore a backup to a USB
> disk. Space on the disk is not the problem, and 99% of the disk's inodes
> are free. The disk contains an ext3 filesystem, and outside of bconsole
> I can read and write to it with no problem.
> 
> I'm trying to restore a backup made by Bacula to the disk. The director
> is on the machine where the USB disk is attached, and the file daemon is
> on another server on the same network (the connection between them is
> fine). The disk is 500 GB.
> 
> Any ideas?


You have of course made sure the restore path is correct?



-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



[Bacula-users] ERR=No space left on device

2012-05-28 Thread Luis H. Forchesatto
Hi

I'm receiving this message when trying to restore a backup to a USB disk.
Space on the disk is not the problem, and 99% of the disk's inodes are
free. The disk contains an ext3 filesystem, and outside of bconsole I can
read and write to it with no problem.

I'm trying to restore a backup made by Bacula to the disk. The director is
on the machine where the USB disk is attached, and the file daemon is on
another server on the same network (the connection between them is fine).
The disk is 500 GB.

Any ideas?



Re: [Bacula-users] SD Bug "User specified spool size reached"

2012-05-28 Thread Alan Brown
On 28/05/12 14:38, Sean Cardus wrote:

>> Scanning the log it looks like the storage daemon decreases its
>> spool size limit every time a backup job crashes. The last time
>> it used the full 50 GB was here:
>
> I've been seeing exactly the same behaviour here (CentOS 4.9, Bacula SD 
> 5.0.3) for quite some time.  However, until I saw your email I didn't 
> correlate the max spool size reducing with each time a job crashes.  I can 
> also confirm that whenever a job crashes, the max spool size drops.
>
> The only workaround I can see at the moment is restarting the SD...

Have you looked in the spool area to find orphan files?







Re: [Bacula-users] Slow Performance

2012-05-28 Thread John Drescher
> I've just started using Bacula, having installed the RPMs from the Scientific 
> Linux distro. I'm trying to back up several machines to disk files on the 
> Bacula server and am experiencing two odd problems:
>
> i) disk volumes created by Bacula are changing their state to Error with the 
> message that they are a different size to that recorded in the catalog. Is 
> this normal?
>
>

I have seen that maybe 3 times in 10 years (around 30
thousand completed jobs), but it is in no way normal.  I would
look for an updated version of Bacula; 5.2.6 is the current version

>
> ii) data throughput is really low. The machines are all connected via gigabit 
> connections, yet I'm seeing less than 6 MBytes/s transfer to disk; more often 
> than not it is down to 2 MBytes/s. The RAID array I'm using responds at a 
> speed of about 70 MBytes/s when I test it with a dd of a few gigabytes from 
> /dev/zero. Rsync between the same hosts was very quick, generating nearly 
> 500 Mbit/s of network traffic.
>
> Can anyone offer any suggestions?
>
>

Turn off software compression or run multiple simultaneous jobs from
different clients.
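Software compression is controlled per-FileSet on the Director side; a sketch with hypothetical resource names:

```conf
# bacula-dir.conf -- FileSet resource (sketch)
FileSet {
  Name = "ExampleSet"
  Include {
    Options {
      signature = MD5
      # compression = GZIP   # leave this out (or commented) to disable
    }                        # software compression on the FD
    File = /data
  }
}
```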

John



Re: [Bacula-users] SD Bug "User specified spool size reached"

2012-05-28 Thread Sean Cardus
Hi,

> On a Bacula installation with a CentOS 6.2 (64 bit) server
> running Bacula 5.2.6 director and SD with an HP LTO3 autoloader
> and backing up some twenty-odd machines of various platforms,
> all the jobs report "User specified spool size reached"
> much too early, currently after 1.45 GB instead of the actually
> specified size of 50 GB. Example log excerpt:
[..]
> Scanning the log it looks like the storage daemon decreases its
> spool size limit every time a backup job crashes. The last time
> it used the full 50 GB was here:

I've been seeing exactly the same behaviour here (CentOS 4.9, Bacula SD 5.0.3) 
for quite some time.  However, until I saw your email I didn't correlate the 
max spool size reducing with each time a job crashes.  I can also confirm that 
whenever a job crashes, the max spool size drops.

The only workaround I can see at the moment is restarting the SD...

Sean


Re: [Bacula-users] Slow Performance

2012-05-28 Thread Guy
Are you doing compression or encryption?  And if so, which?  If the files 
are large, I've found that those can really slow down transfers.

---Guy
(via iPhone)

On 19 May 2012, at 01:00, Graham Worley  wrote:

> Hello,
> 
> I've just started using Bacula, having installed the RPMs from the Scientific 
> Linux distro. I'm trying to back up several machines to disk files on the 
> Bacula server and am experiencing two odd problems:
> 
> i) disk volumes created by Bacula are changing their state to Error with the 
> message that they are a different size to that recorded in the catalog. Is 
> this normal?
> 
> ii) data throughput is really low. The machines are all connected via gigabit 
> connections, yet I'm seeing less than 6 MBytes/s transfer to disk; more often 
> than not it is down to 2 MBytes/s. The RAID array I'm using responds at a 
> speed of about 70 MBytes/s when I test it with a dd of a few gigabytes from 
> /dev/zero. Rsync between the same hosts was very quick, generating nearly 
> 500 Mbit/s of network traffic.
> 
> Can anyone offer any suggestions?
> 
> Thanks,
> Graham




[Bacula-users] Slow Performance

2012-05-28 Thread Graham Worley
Hello,

I've just started using Bacula, having installed the RPMs from the Scientific 
Linux distro. I'm trying to back up several machines to disk files on the Bacula 
server and am experiencing two odd problems:

i) disk volumes created by Bacula are changing their state to Error with the 
message that they are a different size to that recorded in the catalog. Is this 
normal?

ii) data throughput is really low. The machines are all connected via gigabit 
connections, yet I'm seeing less than 6 MBytes/s transfer to disk; more often 
than not it is down to 2 MBytes/s. The RAID array I'm using responds at a speed 
of about 70 MBytes/s when I test it with a dd of a few gigabytes from /dev/zero. 
Rsync between the same hosts was very quick, generating nearly 500 Mbit/s of 
network traffic.

Can anyone offer any suggestions?

Thanks,
Graham


Re: [Bacula-users] Same Bacula SD on Multiple Networks (VLANs)

2012-05-28 Thread Rodrigo Renie Braga
2012/5/28 Alan Brown 
>
>
> I have a similar problem. Setting the non-FQDN IPs required in /etc/hosts
> does the trick.
>
I thought about that at first too, but like I said, there are some VLANs
that I have no control of, and depending on the local static configuration
of 300+ servers could become quite unmanageable in the future... But the
solution provided by Bryan in the previous email completely solves this
problem, and I really recommend doing that.


Re: [Bacula-users] Same Bacula SD on Multiple Networks (VLANs)

2012-05-28 Thread Alan Brown
On 28/05/12 05:28, Rodrigo Renie Braga wrote:

> My ideal solution which Bacula, apparently, DOES NOT SUPPORT: make the
> "Address" option on the Storage resource optional, meaning that if not
> specified, the SD Address that Bacula will send to the FD is the same one
> that the Director used to connect on the FD on the first place. That would
> be just awesome...
>
> For now, what's the best solution for my problem? Any thoughts?

I have a similar problem. Setting the non-FQDN IPs required in 
/etc/hosts does the trick.
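That is, pinning the SD's name to the address each client should use; a sketch with hypothetical values:

```conf
# /etc/hosts on a client in one particular VLAN
10.20.0.5    backup-sd    # the SD's address as seen from this VLAN
```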




