Re: [Bacula-users] Disk setup for best performance on backups

2020-01-24 Thread Jason Voorhees
Sorry for the late reply, guys; I've been busy these last few days.

I really appreciate all the responses to this thread. I'll
read them all carefully and take them into consideration for my
deployment.

Hope you all have a nice weekend!

On Wed, Jan 22, 2020 at 11:55 AM Josh Fisher  wrote:
>
>
> On 1/22/2020 10:52 AM, dmaziuk via Bacula-users wrote:
> > On 1/22/2020 2:19 AM, Radosław Korzeniewski wrote:
> >
> >> Unless you are using BEE GED or other similar functionality, you should
> >> never use an SSD in your backup solution, as it will be a pure waste of
> >> money.
> >
> > I'm running a bunch of jobs in parallel and spooling them on an SSD.
> > It works pretty well for the money, but you need to work out how to size it.
>
>
> It makes sense to put Bacula's work directory and any spooling on a
> separate disk of some kind. It keeps writes for spooling, logging,
> etc. from interfering with the sequential nature of the volume data writes.
>
> Another thing that makes an SSD worth the money is that its very fast random
> read/write speeds and IOPS allow running a local DB server for the
> catalog. This is particularly helpful for those of us contending with 1G
> networks, and adding an SSD is far cheaper than upgrading the network to
> 10G.
>
>
> >
> > Dima
> >
> >
>
>
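As a concrete illustration of the spooling setup described above, here is a
minimal sketch that directs data spooling at a separate (e.g. SSD) disk. The
paths and sizes are placeholders; Spool Directory / Maximum Spool Size belong
in the SD Device resource and Spool Data / Spool Attributes in the Director's
Job resource:

# bacula-sd.conf -- spool to the fast disk, write volumes to the big one
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /backup/volumes       # HDD (or st1 EBS volume) holding the volumes
  Spool Directory = /mnt/ssd-spool       # placeholder mount point of the SSD
  Maximum Spool Size = 200 GB            # size this to your concurrent job mix
  # ... remaining Device directives unchanged ...
}

# bacula-dir.conf -- enable spooling per job
Job {
  Name = BackupClient1
  Spool Data = yes
  Spool Attributes = yes
  # ... rest of the Job definition ...
}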




[Bacula-users] Disk setup for best performance on backups

2020-01-20 Thread Jason Voorhees
Hello guys:

I'm planning a Bacula deployment on AWS in the coming weeks. I have
some doubts about disk performance for disk-based backups.

Given that Bacula writes data to big files (e.g. 100 GB per
volume), what technical considerations should I have for the
underlying storage device? These are some of the questions I have
about it:

- Does it matter much whether I choose XFS or ext4 as the filesystem?
- How can I estimate the number of IOPS my local disk needs?
- What does Bacula need most: high IOPS or throughput (MB/s)?
- Based on the previous question, should I choose SSDs over HDDs?
- Is it worth using RAID1 or RAID10 to improve performance?

I was planning to use HDDs, which offer high throughput (500
MB/s) and up to 500 IOPS per disk (these are "st1" EBS volumes).
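To get a feel for whether throughput or IOPS is the limiting factor on a
candidate volume, a quick benchmark helps. Below is a sketch using fio (a
generic disk benchmarking tool, not part of Bacula); the test file path is a
placeholder on the EBS volume being evaluated:

# sequential-write test, roughly what the SD does while writing volumes
fio --name=seqwrite --filename=/backup/fio-test --rw=write --bs=1M \
    --size=8G --ioengine=libaio --direct=1 --numjobs=1

# random-read test, closer to catalog lookups and restore browsing patterns
fio --name=randread --filename=/backup/fio-test --rw=randread --bs=4k \
    --iodepth=32 --runtime=60 --time_based --ioengine=libaio --direct=1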

By the way, I intend to use an external DB (Amazon RDS) for my
Catalog, so my Storage daemon wouldn't share the same underlying
storage.

I hope someone can share some ideas about disk performance.

I didn't find enough info about this topic on the Internet. Thanks in advance.




Re: [Bacula-users] Backups to S3

2020-01-20 Thread Jason Voorhees
You're right, I completely forgot about costs. Using the S3
plugin will definitely be much cheaper than using the Amazon Storage
Gateway VTL.

In my scenario, the cloud (S3) is the only place where I can save my
backups, but I fully understand the comparison you made with your
on-premises environment.

Thank you all guys

On Fri, Jan 17, 2020 at 5:27 PM Dimitri Maziuk via Bacula-users
 wrote:
>
> On 1/17/20 1:03 PM, Jason Voorhees wrote:
> ...
> > I'll start playing around with it. I'll let you know if anything does
> > not work as expected.
>
> You made me look: what I can find on Amazon is that VTL is
> - $125/mo/gateway
> - $0.30/restore
> - $0.09/GB/month in regular or $0.01/GB/month in "glacial" storage.
>
> S3 pricing appears to be
> - $0.023/GB/month in "standard", $0.01 in "infrequent", and $0.004 in
> "glacial" tiers.
>
> However, you pay half a penny per PUT and $0.0004 per GET request on
> "standard", and about an order of magnitude more on the other tiers.
>
> Please let us know how much you end up paying, too: a 4TB NAS HDD is about
> a hundred bucks, one-off, whereas 4TB of VTL storage is $368.64 *every
> month*. In "infrequent" S3 it's "only" forty-one bucks per month (so on
> the 3rd month you're better off with the hard drive), but the
> interesting question is how much you rack up in data transfer fees.
>
> --
> Dimitri Maziuk
> Programmer/sysadmin
> BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
>




Re: [Bacula-users] Backups to S3

2020-01-17 Thread Jason Voorhees
Thanks for your email, Heitor

> Why would you add an emulation/extra layer between Bacula and the final 
> destination?
What do you mean by extra layer? Are you referring to the integration
between Bacula and the Amazon Storage Gateway VTL?

> Bacula S3 Plugin is really good. It provides nice options such as upload
> behavior, caching, and splitting Bacula volumes into small parts to make the
> transfer more granular.
> If your solution grows and you need heavy support on that, you can always
> upgrade to Bacula Enterprise in the future, which is compatible.
>
That's exactly what I wanted to know: whether this Cloud/S3 plugin is not
just good, but also reliable enough to use on production
servers.

I'll start playing around with it. I'll let you know if anything does
not work as expected.
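For anyone else evaluating the community cloud driver, here is a rough sketch
of the SD-side configuration based on the Bacula cloud/S3 documentation. The
bucket name, credentials, cache path and sizes are placeholders, and the exact
directives available depend on your Bacula version:

# bacula-sd.conf
Cloud {
  Name = AWS-S3
  Driver = "S3"
  HostName = "s3.amazonaws.com"
  BucketName = "my-bacula-volumes"        # placeholder bucket
  AccessKey = "REPLACE_ME"
  SecretKey = "REPLACE_ME"
  Protocol = HTTPS
  UriStyle = VirtualHost
  Upload = EachPart                       # push parts as soon as they are written
  Truncate Cache = AfterUpload            # free local cache once a part is uploaded
}

Device {
  Name = S3-Storage
  Device Type = Cloud
  Cloud = AWS-S3
  Media Type = CloudFile
  Archive Device = /var/bacula/s3-cache   # local cache for volume parts
  Maximum Part Size = 100 MB
  LabelMedia = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}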

Thank you guys!




[Bacula-users] Backups to S3

2020-01-16 Thread Jason Voorhees
Hello guys, I hope you're doing well

I have a question: so far, what is the best combination for storing
backups on S3 using Bacula? Is it through the new Cloud plugin, or is
it better to use the Amazon VTL integration with Bacula? Which of these
is more reliable for production environments? Which one would you
use?

My scenario seems to be simple enough: I need to back up data from
several EC2 instances using Bacula and Amazon S3, both (EC2 and S3) in
the same region. So bandwidth and data transfer pricing wouldn't be an
issue for me.

I hope someone can share their experiences and/or recommendations.

Thanks in advance for your help, bats!




[Bacula-users] Bacula largest installations ever

2017-07-10 Thread Jason Voorhees
Hello bats:

I was wondering if you could share some of the biggest Bacula
deployments you have ever made. I mean anything like "I use Bacula to
back up thousands of desktops/servers" or "I have a Bacula backup
infrastructure with more than 30 TB of data being backed
up".

I simply want to know how robust and scalable Bacula can be
compared to competitors such as TSM, HP Data Protector, and Backup
Exec, among others.

This is because one of my customers wants to back up more than 2,000
desktops and backup software is under evaluation.

Thanks in advance for your comments, if any.



Re: [Bacula-users] Bacula in the cloud

2016-10-18 Thread Jason Voorhees
Thank you all for your responses.

I'll take a look at Bacula Systems' whitepaper to see what they're
talking about. Meanwhile, I'll explore some of the alternatives
discussed in this thread, like copying files with scripts and making a
replica on SpiderOak or something similar.

I hope we can have an interesting solution for this "problem" in the
near future.

Thanks again for your time :)



[Bacula-users] Bacula in the cloud

2016-10-17 Thread Jason Voorhees
Hello guys:

Based on your experience, what alternatives do we have for backing up
information to the cloud, preferably using Bacula?

I've been reading some posts about similar topics. Bandwidth always
seems to be a problem: either it isn't big enough (gigabits per second) or
there's too much information (several terabytes), and Bacula can't run
jobs for that long without modifying the source code and recompiling.

I've been thinking about some alternatives like these:

1. Back up to local disk and configure Copy jobs to make a copy to
Amazon S3 (a sketch follows the list below). Local backups would always
run fast, but Copy jobs might be delayed... without issues?

2. Configure Amazon Storage Gateway as a VTL so Bacula can back up
directly to Amazon using virtual tape devices through iSCSI. What do
you think about this?

3. For a single fileserver to be backed up (let's say a Samba server),
I could create a replica in the cloud (e.g. on Amazon EC2) that is
constantly synchronized (via rsync), and run Bacula locally on that EC2
instance.
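A rough sketch of option 1 using standard Director resources; the pool,
storage and job names are hypothetical, and the S3 side assumes a
cloud-backed (or otherwise remote) storage device is already defined:

Pool {
  Name = LocalDiskPool
  Pool Type = Backup
  Storage = LocalDisk                 # fast local disk storage
  Next Pool = S3Pool                  # Copy jobs write to the Next Pool
}

Job {
  Name = CopyToS3
  Type = Copy
  Selection Type = PoolUncopiedJobs   # copy every job in the pool not yet copied
  Pool = LocalDiskPool                # source pool
  Client = backup-server-fd           # required by the parser, not used by Copy
  FileSet = "Full Set"                # likewise only needed to satisfy the parser
  Messages = Standard
  Schedule = NightlyCopy
}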

What other ideas have you thought? Maybe a combination of other open
source tools that can be combined with Bacula? or maybe a different
open source solution that replaces Bacula to save backups in the
cloud?

I'd appreciate some ideas, pros and/or cons to be discussed.

Thanks in advance for your time bats!



Re: [Bacula-users] Best backup strategy on short disk capacity

2016-06-22 Thread Jason Voorhees
On Tue, Jun 21, 2016 at 7:09 AM, Kern Sibbald  wrote:
> Hello,
>
> You are sort of in an impossible situation.  With the large amount of data
> you have and the small backup space, it will be hard to make Bacula work as
> it should.
>
> My very strong conviction is that any single backup job that lasts more than
> 10 hours is too long.  If it takes 7 days to back up one machine, Bacula will
> not work very well.
>
> At a minimum you should split your Windows backup into four separate jobs
> each one backing up a single drive.  Then after completing a full backup of
> each drive, you can stagger backups to do a full of the machine over 4 days
> (one for each drive).  That will reduce the overall backup load.  However,
> if a backup of a single drive takes 2 days, in my opinion, it will still be
> too long.
>
> Another, though slightly more complicated, way to handle it is to separate
> the Fulls as noted above, but when the next Full for a drive comes due, do a
> VirtualFull instead of a Full.  All the other backups for that drive can
> then be Incrementals.  At least one of the Incrementals should be run with
> Accurate = yes.  Then as a security measure do another real Full (per drive)
> once a year.
>
> About how to handle retention periods: that is too complicated for me to
> explain here.  The simplest solution is to get much more disk for Bacula to
> use, or you can get professional support to help with retention
> periods, or ask for help on the list. Reading the "Automated Disk Backup"
> chapter may help, as may "Automatic Volume Recycling" and "Basic Volume
> Management".  It is all there, but it is much easier to understand with
> help.
>

Thanks again Kern, I'll try to make the best of your recommendations to
decide how to back up this specific client.
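For the record, here is a minimal sketch of the per-drive split suggested
above, with hypothetical resource names. Each drive gets its own FileSet and
Job, and the Full runs are staggered across the Sundays of the month:

FileSet {
  Name = win-drive-D
  Include {
    Options {
      signature = MD5
      compression = GZIP
    }
    File = "D:/"
  }
}
# ...repeat for F:/, G:/ and H:/ ...

Schedule {
  Name = sched-drive-D
  Run = Level=Full 1st sun at 21:00          # drive F on 2nd sun, G on 3rd, H on 4th
  Run = Level=Incremental mon-sat at 21:00
}

Job {
  Name = backup-win-drive-D
  Type = Backup
  Client = winfs-fd
  FileSet = win-drive-D
  Schedule = sched-drive-D
  Storage = File1
  Pool = Full
  Messages = Standard
}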



[Bacula-users] Best backup strategy on short disk capacity

2016-06-20 Thread Jason Voorhees
Hello guys:

I'm running Bacula 7.4.0 to back up my Windows fileserver, which
has 4 drives (D:, F:, G:, H:) with 7.1 TB of total capacity.

I ran a full backup that took 7 days and occupied 5.8 TB (~20%
compression rate) on a local filesystem with local SATA disks. This
local filesystem is at 80% usage, so I don't have enough free space to
run a second full backup. I was planning to run Full + Incremental +
Differential backups like this:

- Full -> 1st sunday
- Differential -> 2nd-5th sunday
- Incremental -> monday-saturday

If my full backup takes 7 days to complete, how should I schedule my
jobs? I created 3 Storage resources, each pointing to a different
directory, so I can run Full+Incremental or Full+Differential jobs
concurrently.

Also, how long should my retention periods be so that I can run a
new full backup without filling my disk up to 100%? I thought my
retention period could be something like 30 days or less (maybe 21
days)... but if my last full backup is overwritten by the new
full backup, I guess I won't have a valid copy during the
time the new full backup is running (approximately 7 days). Am I right?

I hope you can understand how confused I am here.

P.S. I only use Bacula to back up this Windows fileserver and
the Bacula server; no other clients are backed up.

Thanks in advance for your help.



Re: [Bacula-users] Mail notifications delayed

2016-06-20 Thread Jason Voorhees
Hi:

> The issue is whether the directive is an "operatorcommand" or a
> "mailcommand".  The transport mechanism does not matter (i.e. bsmtp or mutt,
> or whatever).  A mailcommand is sent at the end of a job and will include
> all messages generated during the job.  The operatorcommand is sent when the
> *event* occurs, so the messages appear immediately and are not batched to
> the end of the job.

Oh, that makes a big difference. I thought "operatorcommand" was used
exclusively to notify operators of "mount" events, but I wasn't
aware of when those messages were sent.
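For context, here is a minimal Messages resource sketch along the lines of the
stock bacula-dir.conf; the addresses and bsmtp path are placeholders. Mount
requests go through the operator/operatorcommand pair and are therefore sent
immediately, while everything else is batched through mail/mailcommand at the
end of the job:

Messages {
  Name = Standard
  mailcommand = "/usr/sbin/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
  operatorcommand = "/usr/sbin/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
  mail = admin@example.com = all, !skipped             # batched, delivered at job end
  operator = admin@example.com = mount                 # sent immediately when a mount is needed
  console = all, !skipped, !saved
  append = "/var/log/bacula/bacula.log" = all, !skipped
  catalog = all
}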

Thanks a lot Kern! :)



Re: [Bacula-users] Mail notifications delayed

2016-06-17 Thread Jason Voorhees
On Thu, Jun 16, 2016 at 3:35 AM, Kern Sibbald  wrote:
> Hello,
>
> There are two things you can do:
>
> 1. Look at the default bacula-dir.conf Messages resource, which avoids the
> problem you are having.  You are missing (or probably have deleted) one
> important directive.
>
> 2. Read the document section on the Messages resource, and in particular the
> "operatorcommand".
>
> Best regards,
> Kern
>

Thanks, Kern. What seems strange to me is that the same Messages
resource configuration works well on a couple of different Bacula
servers (different customers) and I get these kinds of messages
immediately (never delayed).

I've modified the default Messages resource to disable the bsmtp
command in favor of my native mail command, which is currently
routing mail through my Postfix service without any issues. Also, the
"all" rule already includes the "mount" events, so messages should
have arrived without delay. I can't understand what went wrong just
that day.



[Bacula-users] Mail notifications delayed

2016-06-15 Thread Jason Voorhees
Hello guys:

I'm running Bacula 7.4.0 with a Messages resource like this:

Messages {
  Name = Standard
  mail = myem...@gmail.com = all, !info, !skipped, !restored, !saved
  console = all, !skipped, !saved
  append = "/var/log/bacula/bacula.log" = all, !skipped
  catalog = all
}

I just got a message today with repeated lines like these:

10-Jun 07:06 storage-sd JobId 188: Job
storage_job-dataconf.2016-06-05_00.00.00_30 is waiting. Cannot find
any appendable volumes.
Please use the "label" command to create a new Volume for:
Storage:  "DevFile1" (/var/bacula/file1)
Pool: Full
Media type:   DiskFile1


This happened because I didn't correctly plan the retention time and
the max number of volumes in my "Full" pool. That's fine.

But what concerns me is that the Full backup ran on the 1st Sunday of this
month (June 5), yet the message shows "10-Jun" in the date column.

So what I see is that this email should have arrived immediately on
June 5, when the Full backup ran out of volumes, but the message arrived
only today. In my Bacula logs I can see that the same error appears
starting on June 5, so I have no idea why Bacula didn't send the message at
the right time (10 days ago!).

I've checked my Postfix logs and there's no trace of emails sent to
myem...@gmail.com on June 5 or June 10. There are only logs of emails
sent to myem...@gmail.com today at 2:00 pm.

Does anybody have any idea why Bacula has delayed the notification for so long?

I'd appreciate some help. Thanks in advance guys.



Re: [Bacula-users] Bacula client installation on CentOS 6.7

2016-05-09 Thread Jason Voorhees
Hi Enrique:

This list is in English; I doubt you will get replies if you ask in Spanish.

About your question: if you don't have Internet access, how will you get the
RPM? I don't remember whether Bacula comes on the CentOS 6 DVDs, but at least
today I was able to install it using YUM from the Internet without any
additional repository such as EPEL, so I would assume it does come on the DVDs.

However, that version is very old (5.0.0). Wouldn't it be better to get
the Bacula 7.4.0 tarball and compile it, installing the RPM
dependencies from the DVD?


Please translate your email from Spanish to English so that other
English speakers on the list can help you.



2016-05-03 15:17 GMT-05:00 ENRIQUE1742 . :
> Good afternoon,
>
> Could someone please help me with how to install the Bacula client on a
> CentOS server from an RPM package? I have a CentOS machine that has no
> Internet access.
>
> Thanks.
>
> Regards,
> José Enrique
>
>





Re: [Bacula-users] Are my Bacula backups fast enough?

2016-04-22 Thread Jason Voorhees
I forgot to say that the Bacula client on my Zimbra server (which is
currently being backed up) is using most of the CPU. According to the top
command, bacula-fd is using between 10% and 30%, and sometimes even 80%,
of the CPU. Meanwhile, swap usage is currently low (~3%).

On Fri, Apr 22, 2016 at 9:57 AM, Jason Voorhees <jvoorhe...@gmail.com> wrote:
> Hello guys:
>
> I'd like you to share some thoughts about my scenario and performance.
> Here's what I have:
>
> Server (Director & Storage)
> 
> System: KVM guest running on Proxmox 4.1 host
> Network: virtio
> Storage: 5x200 GB Virtual disks (qcow2 format), used as physical volumes
> Filesystem: ext4 'by default' on LVM
> Memory: 4 GB
> CPU: 1 core on guest, 8 cores on host (Intel Core i7 3.4 GHz)
> Bacula: 7.4.0 built from source
> OS: CentOS 7 x86_64
> Roles: Bacula with MySQL (used only for backups, nothing else)
>
>
>
> - iperf tests between my Bacula server and one of my clients (which is
> also a KVM guest sharing the same physical host, with a virtio network)
> gave me nearly 9.78 Gbps, which seems excellent.
> - I'm using disk backups without Attribute Spooling enabled, yet.
> - GZIP compression and SHA1 signatures are also enabled for all my jobs.
> - MySQL hasn't been tuned at all, it came by default.
>
>
> Yesterday I ran a Full backup of a Samba fileserver with 280 GB of
> data. That backup took about 6 hours, with roughly a 15% compression rate
> and nearly 240 GB written by the SD and FD daemons. This Samba server has
> Bacula Client 2.4.4.
>
> Today I'm running a Full backup of a Zimbra server with nearly 450 GB
> of data to be backed up. I'm running iptraf-ng right now and I can see
> a variable transfer rate on my network interface, from 10 Mbps up to
> 80 Mbps, with an average of 50 Mbps.
>
> I'd like to hear some thoughts from you because I have some questions:
>
> - Is it normal to see a rate of 50 Mbps while running a backup?
> - How can I know if MySQL is a bottleneck?
> - Should I disable compression and signatures to improve speed of
> backups? Is it worth it?
> - What if I enable attribute spooling? I've read posts in which some
> people noticed an improvement in data transfer, but the attribute
> despooling process took too long.
> - Is my backup speed good enough? Should I worry about it? Can they be
> even faster?
>
> Thanks in advance for reading and helping :)
>
> Have a nice day



[Bacula-users] Are my Bacula backups fast enough?

2016-04-22 Thread Jason Voorhees
Hello guys:

I'd like you to share some thoughts about my scenario and performance.
Here's what I have:

Server (Director & Storage)

System: KVM guest running on Proxmox 4.1 host
Network: virtio
Storage: 5x200 GB Virtual disks (qcow2 format), used as physical volumes
Filesystem: ext4 'by default' on LVM
Memory: 4 GB
CPU: 1 core on guest, 8 cores on host (Intel Core i7 3.4 GHz)
Bacula: 7.4.0 built from source
OS: CentOS 7 x86_64
Roles: Bacula with MySQL (used only for backups, nothing else)



- iperf tests between my Bacula server and one of my clients (which is
also a KVM guest sharing the same physical host, with a virtio network)
gave me nearly 9.78 Gbps, which seems excellent.
- I'm using disk backups without Attribute Spooling enabled, yet.
- GZIP compression and SHA1 signatures are also enabled for all my jobs.
- MySQL hasn't been tuned at all, it came by default.


Yesterday I ran a Full backup of a Samba fileserver with 280 GB of
data. That backup took about 6 hours, with roughly a 15% compression rate
and nearly 240 GB written by the SD and FD daemons. This Samba server has
Bacula Client 2.4.4.

Today I'm running a Full backup of a Zimbra server with nearly 450 GB
of data to be backed up. I'm running iptraf-ng right now and I can see
a variable transfer rate on my network interface, from 10 Mbps up to
80 Mbps, with an average of 50 Mbps.

I'd like to hear some thoughts from you because I have some questions:

- Is it normal to see a rate of 50 Mbps while running a backup?
- How can I know if MySQL is a bottleneck?
- Should I disable compression and signatures to improve speed of
backups? Is it worth it?
- What if I enable attribute spooling? I've read posts in which some
people noticed an improvement in data transfer, but the attribute
despooling process took too long.
- Is my backup speed good enough? Should I worry about it? Can they be
even faster?

Thanks in advance for reading and helping :)

Have a nice day
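One of the questions above is how to tell whether MySQL is the bottleneck. A
rough way to check is to watch the catalog server while a job is running and
while attributes are being inserted; a sketch, assuming MySQL runs locally
and the catalog database is named bacula:

# sample server counters every 5 seconds while a backup runs
mysqladmin -u root -p -i 5 extended-status | grep -E 'Threads_running|Slow_queries|Questions'

# log anything slower than one second so the batch attribute inserts show up
mysql -u root -p -e "SET GLOBAL slow_query_log = ON; SET GLOBAL long_query_time = 1;"

# look at live queries; long-running INSERTs into File/Filename/Path suggest catalog pressure
mysql -u root -p bacula -e "SHOW FULL PROCESSLIST;"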



Re: [Bacula-users] How to build tray-monitor

2016-04-19 Thread Jason Voorhees
Thank you Kern and Eric, I was now able to build tray-monitor using the
qmake and make commands.
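For the archives, a sketch of the steps that worked for me; the source
directory name is whatever your 7.4.0 tarball unpacks to, and --enable-bat
assumes Qt support was available when configure ran:

cd bacula-7.4.0
./configure --enable-bat       # Qt is needed for bat and the tray monitor
make
cd src/qt-console/tray-monitor
qmake                          # generates the Makefile the tarball does not ship
make                           # produces the bacula-tray-monitor binary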

On Tue, Apr 19, 2016 at 7:30 AM, Eric Bollengier
<eric.bolleng...@baculasystems.com> wrote:
> Hello Jason,
>
> The Makefile is probably generated with the "qmake" command, so, try
>
> qmake
>
> make
>
> in the qt-console/tray-monitor directory.
>
> Hope it helps
>
> Best Regards,
> Eric
>
> On 04/19/2016 02:00 PM, Jason Voorhees wrote:
>> Thank you Kern.
>>
>> Yes, I have tried that but there's no Makefile in tray-monitor
>> directory so I can't run make under this path :( I'd like to build it
>> manually using gcc + arguments but I have no idea how to do it.
>>
>> On Mon, Apr 18, 2016 at 4:04 AM, Kern Sibbald <k...@sibbald.com> wrote:
>>> Hello Jason,
>>>
>>> cd 
>>> ./configure 
>>> make[or "make -j9" goes faster]
>>>
>>> [I assume you have done the above, but if not do it]
>>>
>>> cd src/qt-console/tray-monitor
>>> make
>>>
>>> If I am not mistaken the binary will be in bacula-tray-monitor. You must
>>> then create a configuration file for it that is valid (tray-monitor.conf)
>>> and execute it.
>>>
>>> I have never tried it, so I am not sure it will work.  Feedback would be
>>> appreciated.
>>>
>>> Best regards,
>>> Kern
>>>
>>>
>>> On 04/16/2016 01:06 AM, Jason Voorhees wrote:
>>>>
>>>> Hello guys:
>>>>
>>>> I've built Bacula 7.4.0 but I have no idea how to build the
>>>> tray-monitor application. I can't find any arguments in ./configure
>>>> --help to achieve it even when I found a tray-monitor directory in the
>>>> contents of the tarball.
>>>>
>>>> Hope someone can give me some ideas.
>>>>
>>>> Thanks in advance.
>>>>
>>>>
>



Re: [Bacula-users] How to build tray-monitor

2016-04-19 Thread Jason Voorhees
Thank you Kern.

Yes, I have tried that, but there's no Makefile in the tray-monitor
directory, so I can't run make under this path :( I'd like to build it
manually using gcc plus the right arguments, but I have no idea how to do it.

On Mon, Apr 18, 2016 at 4:04 AM, Kern Sibbald <k...@sibbald.com> wrote:
> Hello Jason,
>
> cd 
> ./configure 
> make[or "make -j9" goes faster]
>
> [I assume you have done the above, but if not do it]
>
> cd src/qt-console/tray-monitor
> make
>
> If I am not mistaken the binary will be in bacula-tray-monitor. You must
> then create a configuration file for it that is valid (tray-monitor.conf)
> and execute it.
>
> I have never tried it, so I am not sure it will work.  Feedback would be
> appreciated.
>
> Best regards,
> Kern
>
>
> On 04/16/2016 01:06 AM, Jason Voorhees wrote:
>>
>> Hello guys:
>>
>> I've built Bacula 7.4.0 but I have no idea how to build the
>> tray-monitor application. I can't find any arguments in ./configure
>> --help to achieve it even when I found a tray-monitor directory in the
>> contents of the tarball.
>>
>> Hope someone can give me some ideas.
>>
>> Thanks in advance.
>>
>>
>>
>



[Bacula-users] How to build tray-monitor

2016-04-15 Thread Jason Voorhees
Hello guys:

I've built Bacula 7.4.0, but I have no idea how to build the
tray-monitor application. I can't find any option in ./configure
--help to achieve it, even though I found a tray-monitor directory in the
contents of the tarball.

Hope someone can give me some ideas.

Thanks in advance.



Re: [Bacula-users] Difference between Full and Used status

2016-03-28 Thread Jason Voorhees
Cool, thanks again Kern!

I believe the Bacula documentation needs to be reviewed and updated because
there are some typos and some outdated and wrong info. I would be happy to help
with this once I finish clarifying some basic concepts about Bacula that I
didn't understand well in the past and that caused me some headaches
(filesystems getting full, failed backups, many failed jobs, volumes in
Error status, jobs being upgraded to Full, data that couldn't be
restored, pain and suffering, etc.).

On Mon, Mar 28, 2016 at 7:23 PM, Kern Sibbald <k...@sibbald.com> wrote:

> Hello Jason,
>
> Yes, when a Volume reaches the "configured" maximum size, it should be
> marked "Full".
>
> Best regards,
> Kern
>
> On 03/29/2016 10:11 AM, Jason Voorhees wrote:
>
>> Thank you Kern, Phil.
>>
>> It's pretty clear to me what a Used status means: when a volume reaches
>> its maximum duration time (i.e. Use Volume Once, Volume Use Duration).
>> But it wasn't clear the difference with Full status.
>>
>> If using tapes I could guess that a volume may have a Full status when
>> the physical media is actually full (i.e. 1.5 TB for a LTO-5 tape), but
>> what about File volumes? If I use "Maximum Volume Bytes = 10G", will my
>> volumes get the Full status when they reach 10 GB of data?
>>
>>
>>
>>
>>
>>
>> On Mon, Mar 28, 2016 at 7:03 PM, Kern Sibbald <k...@sibbald.com
>> <mailto:k...@sibbald.com>> wrote:
>>
>> The two statuses have the same effect, but Full is only marked when
>> the Volume is full.  Used is marked typically when the use time
>> period expires or some similar feature keeps Bacula from writing on
>> the Volume.
>>
>> Best regards,
>> Kern
>>
>>
>> On 03/29/2016 08:41 AM, Jason Voorhees wrote:
>>
>> Hello guys:
>>
>> I feel like a newbie in Bacula so I'm trying to understand some
>> concepts to make my own documents for clarification.
>>
>> I have a question: what's the difference between a Full and Used
>> status?
>>
>> Thank you all in advance.
>>
>>
>>
>>
>>


Re: [Bacula-users] Difference between Full and Used status

2016-03-28 Thread Jason Voorhees
Thank you Kern, Phil.

It's pretty clear to me what Used status means: a volume is marked Used when
it reaches its maximum use time (e.g. Use Volume Once, Volume Use Duration).
But the difference from Full status wasn't clear to me.

If using tapes, I could guess that a volume gets Full status when the
physical media is actually full (e.g. 1.5 TB for an LTO-5 tape), but what
about File volumes? If I use "Maximum Volume Bytes = 10G", will my volumes
get Full status when they reach 10 GB of data?
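For reference, a minimal Pool sketch showing the directives involved; the
values are only illustrative. A volume that hits Maximum Volume Bytes is marked
Full, while one whose use time passes Volume Use Duration is marked Used:

Pool {
  Name = Full
  Pool Type = Backup
  Maximum Volume Bytes = 10G       # volume is marked Full once 10 GB have been written
  Volume Use Duration = 7 days     # volume is marked Used 7 days after its first write
  Volume Retention = 30 days
  AutoPrune = yes
  Recycle = yes
}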






On Mon, Mar 28, 2016 at 7:03 PM, Kern Sibbald <k...@sibbald.com> wrote:

> The two statuses have the same effect, but Full is only marked when the
> Volume is full.  Used is marked typically when the use time period expires
> or some similar feature keeps Bacula from writing on the Volume.
>
> Best regards,
> Kern
>
>
> On 03/29/2016 08:41 AM, Jason Voorhees wrote:
>
>> Hello guys:
>>
>> I feel like a newbie in Bacula so I'm trying to understand some
>> concepts to make my own documents for clarification.
>>
>> I have a question: what's the difference between a Full and Used status?
>>
>> Thank you all in advance.
>>
>>
>>
>>


[Bacula-users] Difference between Full and Used status

2016-03-28 Thread Jason Voorhees
Hello guys:

I feel like a Bacula newbie, so I'm trying to understand some
concepts and write my own notes for clarification.

I have a question: what's the difference between Full and Used status?

Thank you all in advance.



Re: [Bacula-users] vchanger not doing "label barcodes" correctly

2016-03-13 Thread Jason Voorhees
Never mind, I've already solved this. It was due to a permissions issue:
/etc/bacula/bconsole.conf was not readable by the "tape" group.
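In case it helps anyone else, the fix was simply to make the console
configuration readable by the group vchanger runs under ("tape" in my setup);
a sketch, assuming the SD/vchanger side runs as the bacula user:

# let members of the "tape" group read the console configuration
chgrp tape /etc/bacula/bconsole.conf
chmod 640 /etc/bacula/bconsole.conf

# quick check: run bconsole as that user to confirm it can connect
sudo -u bacula bconsole -c /etc/bacula/bconsole.conf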

Thanks anyway guys :)

On Sun, Mar 13, 2016 at 10:11 PM, Jason Voorhees <jvoorhe...@gmail.com> wrote:
> Hello Guys:
>
> This is my first time trying to configure vchanger with Bacula 5.2.13.
>
> I've configured Bacula and vchanger according to the official vchanger
> 1.0.1 documentation included in source code (vchangerHowto.html).
>
> As per vchanger howto I'm supposed to initialize my magazine by
> creating a few volumes by running:
>
> # vchanger /etc/vchanger/vchanger-1.conf createvols 0 3
>
> While it's true that empty volumes are created under the mount point
> of my USB disk, they don't get labeled on Bacula. However if I run
> "label barcodes" manually from bconsole I'm able to label them without
> issues.
>
> The only thing I can "see" is this strange message in vchanger's log file:
>
> mar 13 21:58:01: bconsole: exited with rc=1
> mar 13 21:58:01: WARNING! 'update slots' needed in bconsole
> mar 13 21:58:01: bconsole: exited with rc=1
> mar 13 21:58:01: WARNING! 'label barcodes' needed in bconsole
>
> It seems that after vchanger commands creates the volumes it is unable
> to label them through bconsole due to an unknown error. I tried to use
> "bconsole = /usr/sbin/bconsole -d 200" in vchanger-1.conf but I can't
> get any extra info or verbose messages about the error.
>
> Here's some of my most important configuration:
>
> # /etc/vchanger/vchanger-1.conf
> mar 13 21:58:01: bconsole: exited with rc=1
> mar 13 21:58:01: WARNING! 'update slots' needed in bconsole
> mar 13 21:58:01: bconsole: exited with rc=1
> mar 13 21:58:01: WARNING! 'label barcodes' needed in bconsole
>
>
> # /etc/bacula/bacula-sd.conf
> Autochanger {
> Name = usb-vchanger-1
> Device = usb-vchanger-1-drive-0, usb-vchanger-1-drive-1
> Changer Command = "/usr/local/bin/vchanger %c %o %S %a %d"
> Changer Device = "/etc/vchanger/vchanger-1.conf"
> }
>
> Device {
> Name = usb-vchanger-1-drive-0
> Drive Index = 0
> Autochanger = yes
> Device Type = File
> Media Type = File
> Removable Media = no
> Random Access = yes
> Maximum Concurrent Jobs = 1
> Archive Device = /var/spool/vchanger/vchanger-1/0
> }
>
> Device {
> Name = usb-vchanger-1-drive-1
> Drive Index = 1
> Autochanger = yes
> Device Type = File
> Media Type = File
> Removable Media = no
> Random Access = yes
> Maximum Concurrent Jobs = 1
> Archive Device = /var/spool/vchanger/vchanger-1/1
> }
>
> # /etc/bacula/bacula-dir.conf
> Storage {
>   Name = vchanger-1
>   Address = 192.168.1.220
>   SDPort = 9103
>   Password = 96ba74caf2a7379fd81db4b6e7b82985bafc0aaa
>   Device = usb-vchanger-1
>   Media Type = File
>   Autochanger = yes
>   Maximum Concurrent Jobs = 20
> }
>
> Hope someone can help me or point me to some place where I can find a
> possible solution.
>
> Thanks in advance.



[Bacula-users] vchanger not doing "label barcodes" correctly

2016-03-13 Thread Jason Voorhees
Hello Guys:

This is my first time trying to configure vchanger with Bacula 5.2.13.

I've configured Bacula and vchanger according to the official vchanger
1.0.1 documentation included in source code (vchangerHowto.html).

As per vchanger howto I'm supposed to initialize my magazine by
creating a few volumes by running:

# vchanger /etc/vchanger/vchanger-1.conf createvols 0 3

While it's true that empty volumes are created under the mount point
of my USB disk, they don't get labeled in Bacula. However, if I run
"label barcodes" manually from bconsole, I'm able to label them without
issues.

The only thing I can "see" is this strange message in vchanger's log file:

mar 13 21:58:01: bconsole: exited with rc=1
mar 13 21:58:01: WARNING! 'update slots' needed in bconsole
mar 13 21:58:01: bconsole: exited with rc=1
mar 13 21:58:01: WARNING! 'label barcodes' needed in bconsole

It seems that after the vchanger command creates the volumes, it is unable
to label them through bconsole due to an unknown error. I tried using
"bconsole = /usr/sbin/bconsole -d 200" in vchanger-1.conf, but I can't
get any extra info or verbose messages about the error.

Here's some of my most important configuration:

# /etc/vchanger/vchanger-1.conf
mar 13 21:58:01: bconsole: exited with rc=1
mar 13 21:58:01: WARNING! 'update slots' needed in bconsole
mar 13 21:58:01: bconsole: exited with rc=1
mar 13 21:58:01: WARNING! 'label barcodes' needed in bconsole


# /etc/bacula/bacula-sd.conf
Autochanger {
Name = usb-vchanger-1
Device = usb-vchanger-1-drive-0, usb-vchanger-1-drive-1
Changer Command = "/usr/local/bin/vchanger %c %o %S %a %d"
Changer Device = "/etc/vchanger/vchanger-1.conf"
}

Device {
Name = usb-vchanger-1-drive-0
Drive Index = 0
Autochanger = yes
Device Type = File
Media Type = File
Removable Media = no
Random Access = yes
Maximum Concurrent Jobs = 1
Archive Device = /var/spool/vchanger/vchanger-1/0
}

Device {
Name = usb-vchanger-1-drive-1
Drive Index = 1
Autochanger = yes
Device Type = File
Media Type = File
Removable Media = no
Random Access = yes
Maximum Concurrent Jobs = 1
Archive Device = /var/spool/vchanger/vchanger-1/1
}

# /etc/bacula/bacula-dir.conf
Storage {
  Name = vchanger-1
  Address = 192.168.1.220
  SDPort = 9103
  Password = 96ba74caf2a7379fd81db4b6e7b82985bafc0aaa
  Device = usb-vchanger-1
  Media Type = File
  Autochanger = yes
  Maximum Concurrent Jobs = 20
}

Hope someone can help me or point me to some place where I can find a
possible solution.

Thanks in advance.



[Bacula-users] systemd file for bacula-dir

2016-02-22 Thread Jason Voorhees
Hello guys, hope you're doing good:

I recently installed Bacula 7.4.0 on CentOS 7 x86_64. I don't have
much experience with systemd unit files, but I'm learning through this
process.

I didn't find any bacula*.service files under the /usr/lib/systemd/system
directory, so I created these:

-
bacula-fd.service (it works fine)
-
[Unit]
Description=Bacula File Daemon service
Requires=network.target
After=network.target
RequiresMountsFor=/var/bacula/working /etc/bacula /usr/sbin /var/run/bacula

# from http://www.freedesktop.org/software/systemd/man/systemd.service.html
[Service]
Type=forking
User=root
Group=root
ExecStart=/usr/sbin/bacula-fd -c /etc/bacula/bacula-fd.conf
PIDFile=/var/run/bacula/bacula-fd.9102.pid
StandardError=syslog

[Install]
WantedBy=multi-user.target

--
bacula-sd.service (it works fine)
--
[Unit]
Description=Bacula Storage Daemon service
Requires=network.target
After=network.target
RequiresMountsFor=/var/bacula/working /etc/bacula /usr/sbin /var/run/bacula

# from http://www.freedesktop.org/software/systemd/man/systemd.service.html
[Service]
Type=forking
User=bacula
Group=bacula
ExecStart=/usr/sbin/bacula-sd -c /etc/bacula/bacula-sd.conf
PIDFile=/var/run/bacula/bacula-sd.9103.pid
StandardError=syslog

[Install]
WantedBy=multi-user.target


---
bacula-dir.service (can't make it work)
---
[Unit]
Description=Bacula Director Daemon service
Requires=network.target
After=network.target
RequiresMountsFor=/var/bacula/working /etc/bacula /usr/sbin /var/run/bacula

# From http://www.freedesktop.org/software/systemd/man/systemd.service.html
[Service]
Type=forking
#User=bacula
#Group=bacula
PermissionsStartOnly=yes
ExecStartPre=-/bin/mkdir /var/run/bacula
ExecStartPre=-/bin/chown bacula:bacula /var/run/bacula
ExecStart=/usr/sbin/bacula-dir -u bacula -g bacula -c /etc/bacula/bacula-dir.conf
PIDFile=/var/run/bacula/bacula-dir.9910.pid
ExecReload=/bin/kill -HUP $MAINPID
StandardError=syslog

[Install]
WantedBy=multi-user.target

I first tried it with the "User" and "Group" directives enabled, using the
"bacula" user; then I modified it to start as root but drop privileges with
the "-u" and "-g" options, as shown above.

The problem I have is that bacula-dir seems to start fine for some
seconds by running:

# systemctl start bacula-dir

... which keeps my prompt waiting for a response, but after some time
the command fails and this appears in my logs:

Feb 21 22:08:13 storage systemd: bacula-dir.service start operation
timed out. Terminating.

Am I missing anything?

Thank you
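For comparison, below is a foreground variant that sidesteps Type=forking and
the PID file entirely by letting systemd supervise the process itself; this is
only a sketch and assumes your build accepts the stock -f (run in foreground)
option of bacula-dir:

[Unit]
Description=Bacula Director Daemon service
Requires=network.target
After=network.target

[Service]
# run in the foreground so systemd tracks the main process directly
Type=simple
ExecStartPre=-/bin/mkdir -p /var/run/bacula
ExecStartPre=-/bin/chown bacula:bacula /var/run/bacula
ExecStart=/usr/sbin/bacula-dir -f -u bacula -g bacula -c /etc/bacula/bacula-dir.conf
ExecReload=/bin/kill -HUP $MAINPID
StandardError=syslog

[Install]
WantedBy=multi-user.target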



[Bacula-users] How to know which volumes will be needed for a restore?

2011-05-16 Thread Jason Voorhees
Hi:

Is there a way to know which volumes (tapes) will be required by
Bacula before running a restore job? I hope someone can give me a tip.

Thanks



Re: [Bacula-users] How to know which volumes will be needed for a restore?

2011-05-16 Thread Jason Voorhees
Hi:

2011/5/16 Kleber Leal kleber.l...@gmail.com:
 The volumes will be shown when you execute the restore command.

 Kleber


Yes, you're right. I don't use bconsole to restore jobs but Bat
(the GUI). I hadn't noticed that in the Console section I can see which
volumes will be used before clicking OK to begin the restore job.
Thanks, bye.



Re: [Bacula-users] Bat through SSH X11 forwarding works parcially fine/bad

2011-05-09 Thread Jason Voorhees
It was my mistake. The windows mentioned were always working
fine but were not visible because of an almost invisible bar.

Thanks anyway.

On Fri, May 6, 2011 at 12:52 PM, Jason Voorhees jvoorhe...@gmail.com wrote:
 Hi people:

 I'm running Bacula 5.0.3 (compiled from source) on CentOS 5.6 x86_64.
 I made a basic installation of CentOS without X Window, nor
 GNOME/KDE/etc GUI, just pure console. I installed qt43, qt43-devel
 (required to compile Bat) and xorg-x11-xauth (to be able to run X apps
 trough SSH X11 forwarding).

 My Bacula installation is working perfectly, but when I run bat
 through a ssh x11 forwarding session from my Ubuntu desktop then Bat
 seems to work fine, it loads correctly. I can use without problems the
 Console, Clients, FileSets, Jobs, Pools, Media and
 Storage sections, but Jobs Run, Version Browser and Director
 Status don't seem to work. When I click on them the window just seem
 to hung (similar to suspend a task from shell with ^Z).

 I experienced this on a similar  system with RHEL 6 and I solved just
 installing the hole X Window System and GNOME environment running then
 to runlevel 5. But now I don't want to install the hole GUI of my
 CentOS to solve this.

 Does anybody know why is this hapenning to bat? I hope someone can help me.

 Thanks, bye.




[Bacula-users] Bat through SSH X11 forwarding works parcially fine/bad

2011-05-06 Thread Jason Voorhees
Hi people:

I'm running Bacula 5.0.3 (compiled from source) on CentOS 5.6 x86_64.
I made a basic installation of CentOS without X Window or a
GNOME/KDE/etc. GUI, just a plain console. I installed qt43, qt43-devel
(required to compile Bat) and xorg-x11-xauth (to be able to run X apps
through SSH X11 forwarding).

My Bacula installation is working perfectly, and when I run Bat
through an SSH X11 forwarding session from my Ubuntu desktop, Bat
seems to work fine and loads correctly. I can use the
Console, Clients, FileSets, Jobs, Pools, Media and
Storage sections without problems, but Jobs Run, Version Browser and Director
Status don't seem to work. When I click on them, the window just seems
to hang (similar to suspending a task from the shell with ^Z).

I experienced this on a similar system with RHEL 6, and I solved it just by
installing the whole X Window System and GNOME environment and then running
at runlevel 5. But now I don't want to install the whole GUI on my
CentOS to solve this.

Does anybody know why this is happening to Bat? I hope someone can help me.

Thanks, bye.



[Bacula-users] Speed of backups

2011-04-28 Thread Jason Voorhees
Hi:

I'm running Bacula 5.0.3 on RHEL 6.0 x86_64 with an IBM TS3100 tape
library, with hardware compression enabled and software (Bacula)
compression disabled, using LTO-5 tapes. I have a Gigabit Ethernet
network, and iperf tests report a bandwidth of 112 MB/s.

I'm not using any spooling configuration, and I'm running just one
job at a time (no concurrency). This is the configuration of my FileSet:

FileSet {
   Name = fset-qsrpsfs1
   Include {
   File = /etc
   File = /root
   File = /var/spool/cron
   File = /var/run/utmp
   File = /var/log
   File = /data
   Options {
   signature=SHA1
   #compression=GZIP
   }
   }
}

My backups were running at a minimum of 54 MB/s and a maximum of 79
MB/s. Are these speeds normal for my scenario? What is the best
speed I could achieve with an optimal Bacula configuration, considering
that the network isn't loaded at the times the backups run? I'm
concerned about the difference between the maximum my network supports
(112 MB/s) and the speed of my backups (54-79 MB/s).

I hope someone can give me some advice on improving the performance of
my backups.

Thanks



Re: [Bacula-users] Speed of backups

2011-04-28 Thread Jason Voorhees
Hi:

On Thu, Apr 28, 2011 at 10:19 AM, John Drescher dresche...@gmail.com wrote:
 On Thu, Apr 28, 2011 at 11:08 AM, Jason Voorhees jvoorhe...@gmail.com wrote:
 Hi:

 I'm running Bacula 5.0.3 in RHEL 6.0 x86_64 with a Library tape IBM
 TS3100 with hardware compression enabled and software (Bacula)
 compression disabled, using LTO-5 tapes. I have a Gigabit Ethernet
 network and iperf tests report me a bandwidth of 112 MB/s.

 I'm not using any spooling configuration and I'm running concurrent
 jobs, just only one. This is the configuration of my fileset:

 FileSet {
       Name = fset-qsrpsfs1
       Include {
               File = /etc
               File = /root
               File = /var/spool/cron
               File = /var/run/utmp
               File = /var/log
               File = /data
               Options {
                       signature=SHA1
                       #compression=GZIP
               }
       }
 }

 My backups were running with a minimum of 54 MB/s and a maximum of 79
 MB/s. Are these speeds normal for my scenario?

 Is the source a raid? Do you have many small files?

 John


No, there is just a normal number of files from a shared folder on
my fileserver: spreadsheets, documents, images, PDFs, just
end-user data.



Re: [Bacula-users] Speed of backups

2011-04-28 Thread Jason Voorhees
 The performance problem is probably filesystem performance. A single
 hard drive will only hit 100 MB/s if you are backing up files that are
 a few hundred MB.


 --
 John M. Drescher


How could I run some tests to verify this? I'm running the MySQL server on
the same host where Bacula is installed. My server is an IBM Blade HS
with a local RAID 1.



Re: [Bacula-users] Speed of backups

2011-04-28 Thread Jason Voorhees
On Thu, Apr 28, 2011 at 10:30 AM, John Drescher dresche...@gmail.com wrote:
 No, it's just a normal number of files from a shared folder on my
 fileserver: spreadsheets, documents, images, PDFs, just regular
 end-user data.


 The performance problem is probably filesystem performance. A single
 hard drive will only hit 100 MB/s if you are backing up files that are
 a few hundred MB.


 --
 John M. Drescher


Hi:

How can I find out where the bottleneck is? I'm using an ext4 filesystem.
Are these tests useful?

[root@qsrpsbk1 ~]# hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads:  370 MB in  3.01 seconds = 122.89 MB/sec
[root@qsrpsbk1 ~]# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   3770 MB in  2.00 seconds = 1885.16 MB/sec
 Timing buffered disk reads:  370 MB in  3.00 seconds = 123.20 MB/sec

First I disabled 'signature=SHA1' in my Jobs and gained a little backup
speed (between 79 and 83 MB/s). Then I enabled Data Spooling
(which also enables attribute spooling) and my backups became slower
(between 36 and 45 MB/s).

All my tests were done running Full backups. The first large backups
were done without spooling and covered 4 TB of data, while my recent backups
were done with spooling enabled and only 8 GB of data.

Would I need to modify the buffer size settings in my Storage Daemon?
Is there any other reason for my slow backups? Are my backups really
slow, or is this speed normal/good?



Re: [Bacula-users] Speed of backups

2011-04-28 Thread Jason Voorhees
On Thu, Apr 28, 2011 at 11:41 AM, John Drescher dresche...@gmail.com wrote:
 How can I know where's the bottleneck? I'm using an ext4 filesystem.
 Are these tests useful?

 [root@qsrpsbk1 ~]# hdparm -t /dev/sda

 /dev/sda:
  Timing buffered disk reads:  370 MB in  3.01 seconds = 122.89 MB/sec
 [root@qsrpsbk1 ~]# hdparm -tT /dev/sda

 /dev/sda:
  Timing cached reads:   3770 MB in  2.00 seconds = 1885.16 MB/sec
  Timing buffered disk reads:  370 MB in  3.00 seconds = 123.20 MB/sec


 That is expected for a hard drive purchased in 2010 or newer.


 First I disabled 'signature=SHA1' in my Jobs and gained a little backup
 speed (between 79 and 83 MB/s). Then I enabled Data Spooling
 (which also enables attribute spooling) and my backups became slower
 (between 36 and 45 MB/s).


 Your benchmark does not measure random or small-file performance
 (smaller than a few MB). No mechanical hard drive will reach 100
 MB/s for this. SSDs or RAID will, but not a regular hard drive.

 John


So do you believe these backup speeds are normal? I thought my
tape library with LTO-5 tapes could write at approximately 140 MB/s. Isn't it
possible to achieve higher speeds?

I recently made an additional test, modifying the buffer size settings of
my storage daemon like this:

 Maximum Network Buffer Size = 262144 #65536
 Maximum block size = 262144

With these settings, signature=SHA1 disabled, data & attribute
spooling enabled and Bacula software compression disabled, I'm still
getting slow speeds.
I also don't understand why the speed is so variable in each test I run. The
rate is different each time (32, 36, 45, 54, 67, ..., 80
MB/s) even though the backup Job is always the same (8 GB of data).

I'm reading the backup speed from the final job report in the logs, like this:

  Build OS:   x86_64-unknown-linux-gnu redhat Enterprise release
  JobId:  19
  Job:job-qsrpsfs1test.2011-04-28_11.40.44_07
  Backup Level:   Full
  Client: qsrpsfs1 5.0.3 (04Aug10)
x86_64-unknown-linux-gnu,redhat,Enterprise release
  FileSet:fset-qsrpsfs1test 2011-04-28 10:06:18
  Pool:   FS1Weekly (From User input)
  Catalog:Default (From Client resource)
  Storage:TapeLibrary-TS3100 (From Pool resource)
  Scheduled time: 28-Apr-2011 11:40:42
  Start time: 28-Apr-2011 11:40:46
  End time:   28-Apr-2011 11:45:15
  Elapsed time:   4 mins 29 secs
  Priority:   10
  FD Files Written:   14,589
  SD Files Written:   14,589
  FD Bytes Written:   8,712,538,848 (8.712 GB)
  SD Bytes Written:   8,714,926,626 (8.714 GB)
  Rate:   32388.6 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): L5BA0008
  Volume Session Id:  1
  Volume Session Time:1304008831
  Last Volume Bytes:  52,326,931,456 (52.32 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

That Rate is the speed of the backup, right?

:(
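
For reference, the Rate line is just the job's bytes divided by the elapsed
wall-clock time, so it includes everything that happened during the job.
A quick check with the numbers above (a sketch, assuming bc is installed):

# 8,712,538,848 FD bytes over 4 min 29 s = 269 s, expressed in KB/s (KB = 1000 bytes)
echo "scale=1; 8712538848 / 269 / 1000" | bc
# -> 32388.6, which matches the Rate reported in the job summary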



Re: [Bacula-users] Speed of backups

2011-04-28 Thread Jason Voorhees
On Thu, Apr 28, 2011 at 12:01 PM, John Drescher dresche...@gmail.com wrote:
 So do you believe these backup speeds are normal? I thought my
 tape library with LTO-5 tapes could write at approximately 140 MB/s. Isn't it
 possible to achieve higher speeds?

 You need to speed up your source filesystem to achieve better
 performance. Use RAID10 or get an SSD. It has nothing at all to do with
 your tape or Bacula speed if your hard drive cannot read what it needs
 to back up at the maximum network speed. Or do not worry so much about how
 much time a single backup takes, and enable concurrency and
 spooling. These will better utilize the speed of your tape drive.


 John


Strange! I ran an hdparm test on the fileserver (the backup source)
and got better performance:

[root@qsrpsfs1 ~]# hdparm -t /dev/mapper/mpath0

/dev/mapper/mpath0:
 Timing buffered disk reads:  622 MB in  3.00 seconds = 207.20 MB/sec



Re: [Bacula-users] Speed of backups

2011-04-28 Thread Jason Voorhees
On Thu, Apr 28, 2011 at 1:43 PM, John Drescher dresche...@gmail.com wrote:
 On Thu, Apr 28, 2011 at 2:38 PM, John Drescher dresche...@gmail.com wrote:
 /dev/mapper/mpath0:
  Timing buffered disk reads:  622 MB in  3.00 seconds = 207.20 MB/sec

 That is a raid. But you still may not be able to sustain over 100MB/s
 of somewhat random reads. Remember that hdparm is only measuring
 sequential performance of large reads.


 If it is a RAID I could be wrong about the filesystem performance being
 the only problem. You can test if this is the case by creating a
 backup of a single file that is 1GB or so to media that is already
 loaded and see if the performance is closer to the network maximum.
 Maybe test an already compressed file of this size versus a file
 containing all zeros.

 John


I tried copying a 10 GB file between the two servers (Bacula and
fileserver) with scp and got a 48 MB/s transfer speed. Is this why
my backups always stay near that speed?
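
Worth noting as a side check (a sketch, not a definitive test): scp encrypts
the stream, so it is often limited by the SSH cipher on one CPU core rather
than by the network itself. A raw transfer with netcat removes that factor.
Port 5001 is arbitrary, the receiving host name follows the servers mentioned
in this thread, and older nc variants need "-l -p 5001" instead of "-l 5001".

# On the Bacula server (receiver): discard whatever arrives on TCP port 5001
nc -l 5001 > /dev/null

# On the fileserver (sender): push 10 GB of zeros across the wire;
# dd's summary line gives an approximate wire speed
dd if=/dev/zero bs=1M count=10240 | nc qsrpsbk1 5001

If that number lands near the 112 MB/s iperf figure, the 48 MB/s scp result
says more about scp than about the network.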



Re: [Bacula-users] Speed of backups

2011-04-28 Thread Jason Voorhees

 to get the maximum speed with your LTO-5 drive you should enable data
 spooling and change the Maximum File Size parameter. The spool disk
 must be a fast one, especially if you want to run concurrent jobs.
 Forget hdparm as a benchmark; use bonnie++, tiobench or iozone.

 Then, after you have enabled spooling and restarted the SD, start
 the backup. Look in the log file for the Despooling elapsed time. This
 will show you how fast the spool file can be written to tape.

 The backup time will increase overall because the data will first be
 written to disk and then to tape, but at least you eliminate the
 network and the data source (server) as bottlenecks.

 With spooling enabled and LTO-4 drives I get up to 100 - 140 MB/s.

 Ralf


Hi, thanks for your help.

I think I was confusing some terms. The rate I reported was calculated
over the total elapsed time of the backup. But now, following your comments, I
got this from my logs:

With spooling enabled:

- Job write elapsed time: 102 MB/s average
- Despooling elapsed time: 84 MB/s average


Without spooling enabled:

- Job write elapsed time: 68 MB/s average

These are averages obtained from a group of 5 or more jobs in each
case (with and without spooling). So I can see that with spooling
enabled, the write-to-tape phase reaches higher speeds than the
combined copy-from-FD/write-to-tape path does without spooling.

Now the question is: why am I getting such low despooling speeds if I
use LTO-5 tapes? Shouldn't I have higher speeds than you do with LTO-4
tapes?

I think the write speed of my tape drive isn't the best. Some of my tests
(all with spooling enabled) were done with the Buffer & Block size
settings in my storage daemon, and other tests were done without those
settings, so I didn't see any appreciable performance improvement from
changing the block size values.

What can I do?
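
For anyone following along, this is a minimal sketch of the directives being
discussed, with illustrative values only (the resource names are placeholders,
/opt/spool is assumed as the spool directory, and these lines should be merged
into the existing resources rather than copied verbatim):

# bacula-dir.conf, in the Job (or JobDefs) resource:
Job {
  Name = "job-qsrpsfs1"        # placeholder
  Spool Data = yes             # spool to local disk first, then despool to tape
  Spool Attributes = yes
}

# bacula-sd.conf, in the Device resource for the tape drive:
Device {
  Name = "Drive-0"             # placeholder
  Spool Directory = /opt/spool
  Maximum Spool Size = 200 GB  # illustrative; size it to the spool disk
}

With spooling, the total elapsed time covers both the spool and despool
phases, which is why the overall job rate can drop even while the
despool-to-tape rate (the number that reflects the drive) goes up.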



Re: [Bacula-users] Speed of backups

2011-04-28 Thread Jason Voorhees

 I got the biggest gain by changing Maximum File Size to 5 GB. How
 fast is the disk where your spool file is located?


OK, I don't have that setting enabled but I could try it. Question:
how did you decide that 5 GB is an optimal value for your LTO-4 tapes? What
value should I use for my LTO-5 tapes? I don't really understand what
the appropriate value for this directive should be.
I don't know how to tell you how fast my local disk (spool
directory) is, but is this useful?


[root@qsrpsbk1 ~]# hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads:  370 MB in  3.01 seconds = 123.00 MB/sec


 A different test would be to create a 10 GB file with data from
 /dev/urandom in the spool directory and then write this file to tape
 (eg. nst0). Note: this will overwrite your existing data on tape and
 you might have to release the drive in bacula.

 dd if=/spoolfile-directory/testfile of=/dev/nst0 bs=xxxk (your bacula block 
 size)

It will take some time to run this test because of the low speed of
/dev/urandom, but I'll show you the results in my next message.

By the way, do I really need to change the buffer/block size in the
Storage Daemon configuration? What buffer/block size should I use?


 Ralf
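
Not an authoritative recommendation, just a sketch of where these directives
go (values are illustrative and the device name is a placeholder): Maximum
File Size and the block/buffer sizes all live in the SD's Device resource,
so in bacula-sd.conf something like:

Device {
  Name = "Drive-0"                     # placeholder; add to the existing Device resource
  Maximum File Size = 5 GB             # fewer filemarks than the default (about 1 GB)
  Maximum Block Size = 262144          # 256 KB blocks, as tried earlier in this thread
  Maximum Network Buffer Size = 262144
}

One caveat: volumes written with one block size are generally read back with
the same setting, so block-size changes are usually introduced on fresh volumes.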




Re: [Bacula-users] Speed of backups

2011-04-28 Thread Jason Voorhees
 I got the biggest gain by changing Maximum File Size to 5 GB. How
 fast is the disk where your spool file is located?

 A different test would be to create a 10 GB file with data from
 /dev/urandom in the spool directory and then write this file to tape
 (eg. nst0). Note: this will overwrite your existing data on tape and
 you might have to release the drive in bacula.

 dd if=/spoolfile-directory/testfile of=/dev/nst0 bs=xxxk (your bacula block 
 size)


 Ralf

Ok, I made this:
[root@qsrpsbk1 spool]# dd if=/dev/urandom of=random.tst bs=1M count=10240

[root@qsrpsbk1 spool]# dd if=random.tst of=/dev/st0 bs=2048k count=5000
5000+0 records in
5000+0 records out
10485760000 bytes (10 GB) copied, 160.232 s, 65.4 MB/s

The block size of 2048k is the same one I'm using in the Bacula Storage Daemon
configuration (Maximum Block Size = 2097152). Why was the speed of dd only
65 MB/s when my despooling speed is about 80-90 MB/s?

Well, these are my results of a bonnie++ test:

[root@qsrpsbk1 ~]# bonnie++ -d /opt/spool/ -u bacula
Using uid:500, gid:33.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03e   --Sequential Output-- --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
qsrpsbk1.qui 36200M 77100  96 108223  18 48701   7 77012  93 142733   8 681.7   0
--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
qsrpsbk1.mydomain.com,36200M,77100,96,108223,18,48701,7,77012,93,142733,8,681.7,0,16,+,+++,+,+++,+,+++,+,+++,+,+++,+,+++


What do you think about these tests?
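
As a cross-check of the bonnie++ sequential block figures, a plain timed write
into the spool directory is sometimes easier to relate to despooling; a small
sketch (the 4 GB size is arbitrary, and conv=fdatasync makes dd wait until the
data actually reaches disk before reporting a rate):

# Sequential write test into the spool directory; delete the file afterwards
dd if=/dev/zero of=/opt/spool/ddtest.bin bs=1M count=4096 conv=fdatasync
rm -f /opt/spool/ddtest.bin

During despooling this disk is being read while the tape is written, so the
~108 MB/s block-write and ~142 MB/s block-read figures above roughly bound
what the spool disk can contribute.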



[Bacula-users] About tapes capacity

2011-04-20 Thread Jason Voorhees
Hi people:

I'm running Bacula 5.0.3 with an IBM TS3100 tape library using LTO-5
tapes of 1.5 TB capacity each. I have a simple question: if I run
a backup job for a server that holds more than 1.5 TB (for example, 2 TB of
data), what happens when Bacula reaches the limit of the tape's capacity?
Does Bacula automatically change the tape?

I'm not quite sure about this because I recently ran a 2 TB backup job
and Bacula apparently used just one tape; have a look at my
volume listing:

* list volume
...
...
|       5 | L5BA0005   | Append    |       1 | 2,058,019,117,056 |    2,059 |    2,160,000 |       1 |    5 |         1 | LTO-5     | 2011-04-19 20:58:51 |
+-++---+-+---+--+--+-+--+---+---+-+

When the job started, Bacula took the tape L5BA0005 and never required
another one. As you can see, Bacula apparently wrote 2,058,019,117,056
bytes (approx. 2 TB), but my tape has only 1.5 TB of capacity. Does anybody
know why? My job isn't using compression, so I really don't understand
this.
Just for additional information, there are a lot of new tapes in my
magazines. Have a look at this:

* list volume
Pool: Scratch
+---------+------------+-----------+---------+----------+----------+--------------+---------+------+-----------+-----------+---------------------+
| MediaId | VolumeName | VolStatus | Enabled | VolBytes | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
+---------+------------+-----------+---------+----------+----------+--------------+---------+------+-----------+-----------+---------------------+
|       6 | L5BA0006   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    6 |         1 | LTO-5     | 0000-00-00 00:00:00 |
|       7 | L5BA0007   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    7 |         1 | LTO-5     | 0000-00-00 00:00:00 |
|       8 | L5BA0008   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    8 |         1 | LTO-5     | 0000-00-00 00:00:00 |
|      12 | L5BA0001   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    0 |         0 | LTO-5     | 0000-00-00 00:00:00 |
|      13 | L5BA0009   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    9 |         1 | LTO-5     | 0000-00-00 00:00:00 |
|      14 | L5BA0010   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    0 |         0 | LTO-5     | 0000-00-00 00:00:00 |
|      15 | L5BA0011   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    0 |         0 | LTO-5     | 0000-00-00 00:00:00 |
|      16 | L5BA0012   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    0 |         0 | LTO-5     | 0000-00-00 00:00:00 |
|      17 | L5BA0013   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    0 |         0 | LTO-5     | 0000-00-00 00:00:00 |
|      18 | L5BA0014   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    0 |         0 | LTO-5     | 0000-00-00 00:00:00 |
|      19 | L5BA0015   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    0 |         0 | LTO-5     | 0000-00-00 00:00:00 |
|      20 | L5BA0016   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    0 |         0 | LTO-5     | 0000-00-00 00:00:00 |
|      21 | L5BA0017   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    0 |         0 | LTO-5     | 0000-00-00 00:00:00 |
|      22 | L5BA0018   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    0 |         0 | LTO-5     | 0000-00-00 00:00:00 |
|      23 | L5BA0019   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    0 |         0 | LTO-5     | 0000-00-00 00:00:00 |
|      24 | L5BA0020   | Append    |       1 |   64,512 |        0 |   31,536,000 |       1 |    0 |         0 | LTO-5     | 0000-00-00 00:00:00 |
+---------+------------+-----------+---------+----------+----------+--------------+---------+------+-----------+-----------+---------------------+

So I don't know why Bacula didn't take a free tape from the Scratch pool. I
hope someone can help me understand this Bacula behavior.

Thanks, bye.



Re: [Bacula-users] About tapes capacity

2011-04-20 Thread Jason Voorhees
On Wed, Apr 20, 2011 at 9:22 AM, Paul Mather p...@gromit.dlib.vt.edu wrote:
 On Apr 20, 2011, at 10:17 AM, Jason Voorhees wrote:

 Hi people:

 I'm running Bacula 5.0.3 with an IBM TS3100 tape library using LTO-5
 tapes of 1.5 TB capacity each. I have a simple question: if I run
 a backup job for a server that holds more than 1.5 TB (for example, 2 TB of
 data), what happens when Bacula reaches the limit of the tape's capacity?
 Does Bacula automatically change the tape?

 I'm not quite sure about this because I recently ran a 2 TB backup job
 and Bacula apparently used just one tape; have a look at my
 volume listing:

 * list volume
 ...
 ...
 |       5 | L5BA0005   | Append    |       1 | 2,058,019,117,056 |
 2,059 |    2,160,000 |       1 |    5 |         1 | LTO-5     |
 2011-04-19 20:58:51 |
 +-++---+-+---+--+--+-+--+---+---+-+

 When the job started, Bacula took the tape L5BA0005 and never required
 another one. As you can see, Bacula apparently wrote 2,058,019,117,056
 bytes (approx. 2 TB), but my tape has only 1.5 TB of capacity. Does anybody
 know why? My job isn't using compression, so I really don't understand
 this.


 The job may not be using compression but the tape drive itself may have 
 compression enabled, which could account for being able to write 2 TB to the 
 LTO-5 tape.

 (The 1.5 TB capacity is the native capacity of an LTO-5 tape without hardware 
 compression enabled.)

 Cheers,

 Paul.
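
For a rough sense of the ratio involved, a back-of-the-envelope check against
the volume listing above (a sketch, assuming bc is installed):

# Bytes written to L5BA0005 vs. the 1.5 TB (1,500,000,000,000 byte) native LTO-5 capacity
echo "scale=2; 2058019117056 / 1500000000000" | bc
# -> 1.37, so at least a 1.37:1 average compression ratio was needed to fit
#    this much data, which points to hardware compression being on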



Hi, excuse me if this is a newbie question, but how and where can I verify
whether hardware compression is enabled? Would it be in the web
administration interface of my tape library?

Thanks
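
Not authoritative, but two common ways to check from the Linux host, besides
the library's web interface, are tapeinfo (from the mtx package) and mt (from
mt-st). Here /dev/sg1 and /dev/nst0 are assumptions for the drive's SCSI
generic and non-rewinding device nodes on this system.

# Query the drive over SCSI; look for DataCompEnabled / DataCompCapable in the output
tapeinfo -f /dev/sg1

# mt can show drive status and, on most drives, toggle compression
mt -f /dev/nst0 status
mt -f /dev/nst0 compression 0   # example: request hardware compression off (1 = on)

Many LTO drives ship with compression enabled by default, which would be
consistent with the volume listing in this thread.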



Re: [Bacula-users] Experiences with Autochanger IBM TS3100

2010-10-07 Thread Jason Voorhees
On Wed, Oct 6, 2010 at 11:59 PM, Buskas, Patric
patric.bus...@capgemini.com wrote:
 Hi,

 I'm using the TS3100 with CentOS 5.5 and Bacula 5.0.3 and it's working
 great.
 I don't think there is any mainstream Linux distribution that won't work
 with this autochanger unless it's too old.
 It works great with the mtx command as well.

 /Patric

Thanks a lot for your comments; this gives me confidence to buy that
autochanger soon.

I hope someone else who has experience with this hardware using Bacula and
Linux can share a few words in this thread.

Thanks again, bye.



[Bacula-users] Experiences with Autochanger IBM TS3100

2010-10-06 Thread Jason Voorhees
Hi:

We're planning to buy an IBM TS3100 autochanger, and I would like to ask
for some suggestions about this model. Is it fully
supported on modern Linux distributions? Does it require a recent
kernel version? Is it difficult to configure? Would you recommend a
particular Linux distribution that supports this device out of the box?

I hope someone can share some opinions with me before I buy it. Thanks
