Re: [Bacula-users] Maximum Volume Bytes without effect

2017-03-03 Thread Josip Deanovic
On Friday 2017-03-03 16:22:01 Marian Neubert wrote:
> Hi all,
> 
> I'm facing an issue with a setup where Maximum Volume Bytes seems to
> have no effect at all.
> The pool RemoteFile has only one volume, Remote-0009, with a current
> size of nearly 1 TB and status "Append". But the total byte size
> (usage) is only about 300 GB.
> 
> Could you give me a hint what's wrong with my config and why no new
> volumes are being created?
> 
> 
> Pool definition:
> 
> Pool {
>Name = RemoteFile
>Pool Type = Backup
>Label Format = Remote-
>Recycle = yes
>AutoPrune = yes
>Volume Retention = 180 days
>Maximum Volume Bytes = 100G
>Maximum Volumes = 5
>Action On Purge = Truncate
> }

Hi Marian,

Your pool resource definition looks fine to me.

If you have changed the pool configuration, you will have to update
its catalog record using the update command from bconsole.
Bacula stores the pool settings in the catalog automatically only
when you define a specific pool resource for the first time. Any
subsequent change requires the use of the update command.
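
A minimal bconsole sketch of that workflow (volume and pool names are
the ones from this thread; the interactive prompts differ between
Bacula versions, so treat the session as illustrative):

  # show the catalog record for the volume, including MaxVolBytes;
  # a value of 0 there would explain why it grew to nearly 1 TB
  *llist volume=Remote-0009

  # copy the changed Pool resource from the Director configuration
  # into the catalog's pool record
  *update pool=RemoteFile

  # apply the new pool defaults to the already-labeled volume
  # (the interactive update menu also offers applying them to all
  # volumes of a pool)
  *update volume=Remote-0009

Keep in mind that updating the pool record does not retroactively
change volumes that were labeled earlier, which is why the last step
is needed for Remote-0009.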


-- 
Josip Deanovic



[Bacula-users] Maximum Volume Bytes without effect

2017-03-03 Thread Marian Neubert
Hi all,

I'm facing an issue with a setup where Maximum Volume Bytes seems to
have no effect at all.
The pool RemoteFile has only one volume, Remote-0009, with a current
size of nearly 1 TB and status "Append". But the total byte size
(usage) is only about 300 GB.

Could you give me a hint what's wrong with my config and why no new
volumes are being created?


Pool definition:

Pool {
   Name = RemoteFile
   Pool Type = Backup
   Label Format = Remote-
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 180 days
   Maximum Volume Bytes = 100G
   Maximum Volumes = 5
   Action On Purge = Truncate
}


Client and Job definitions:

Client {
   Name = fm-addc-fd
   Address = fm-addc.home.netz
   FDPort = 9102
   Catalog = MyCatalog
   Password = "NotListedHere"
   File Retention = 30 days
   Job Retention = 3 months
   AutoPrune = yes
}

Job {
   Name = "Backup_fm-addc"
   JobDefs = "DefaultJob"
   Client = fm-addc-fd
   Pool = RemoteFile
   FileSet="fm-addc - Full Set"
}

Thanks in advance!
/Marian



Re: [Bacula-users] Maximum Volume Bytes

2011-05-24 Thread ewan.brown
For what it's worth, I have a similar amount of data and settled on a
Maximum Volume Bytes of 200G, which has been working without any issues
for a few months now.

Cheers,

Ewan

 -----Original Message-----
 From: Mike Seda [mailto:mas...@stanford.edu]
 Sent: 20 May 2011 18:49
 To: Bacula Users
 Subject: [Bacula-users] Maximum Volume Bytes
 
 Hi All,
 I'm currently setting up a disk-based storage pool in Bacula and am
 wondering what I should set Maximum Volume Bytes to. I was thinking of
 setting it to 100G, but am just wondering if this is sane.
 
 FYI, the total data of our clients is 15 TB, but we are told that this
 data should at least double each year.
 
 I noticed that there seems to be a limit on the number of disk-based
 volumes in a pool due to the suffix having 4 digits, i.e. 0001. This
 adds up to about 10,000 possible volumes per pool. So 10,000 volumes x
 100 GB is 1 PB. That seems like overkill. Perhaps setting Maximum
 Volume Bytes = 10G would be more reasonable since this would add up to
 100 TB.
 
 I'm also storing these file volumes on ZFS (v28 w/ dedup=on), and am
 wondering if smaller volumes will dedup better than larger ones. I'm
 curious to see what others are doing to take advantage of dedup-enabled
 ZFS storage w/ Bacula.
 
 Thanks,
 Mike
 



[Bacula-users] Maximum Volume Bytes

2011-05-20 Thread Mike Seda
Hi All,
I'm currently setting up a disk-based storage pool in Bacula and am 
wondering what I should set Maximum Volume Bytes to. I was thinking of 
setting it to 100G, but am just wondering if this is sane.

FYI, the total data of our clients is 15 TB, but we are told that this 
data should at least double each year.

I noticed that there seems to be a limit on the number of disk-based 
volumes in a pool due to the suffix having 4 digits, i.e. 0001. This 
adds up to about 10,000 possible volumes per pool. So 10,000 volumes x 
100 GB is 1 PB. That seems like overkill. Perhaps setting Maximum 
Volume Bytes = 10G would be more reasonable since this would add up to 
100 TB.

I'm also storing these file volumes on ZFS (v28 w/ dedup=on), and am 
wondering if smaller volumes will dedup better than larger ones. I'm 
curious to see what others are doing to take advantage of dedup-enabled 
ZFS storage w/ Bacula.

Thanks,
Mike



Re: [Bacula-users] Maximum Volume Bytes

2011-05-20 Thread Mike Seda
All,
Never mind about dedup with Bacula. It seems that the current block
format doesn't work too well with it:
http://changelog.complete.org/archives/5547-research-on-deduplicating-disk-based-and-cloud-backups

I'm getting decent compression rates with LZJB (compression=on),
though, which makes it compelling enough to stick with ZFS.
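
For reference, the ZFS side of that is just two dataset properties;
the dataset name tank/bacula below is made up:

  # enable LZJB compression on the dataset holding the Bacula volumes
  zfs set compression=lzjb tank/bacula

  # later, check the ratio it actually achieved
  zfs get compression,compressratio tank/bacula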

My original question about the recommended Maximum Volume Bytes size
still stands, though. The documentation seems to recommend 50 GB, but
we need to back up 15 TB of data. I'm just wondering whether that
changes things or not.

Cheers,
Mike


On 05/20/2011 10:48 AM, Mike Seda wrote:
 Hi All,
 I'm currently setting up a disk-based storage pool in Bacula and am
 wondering what I should set Maximum Volume Bytes to. I was thinking of
 setting it to 100G, but am just wondering if this is sane.

 FYI, the total data of our clients is 15 TB, but we are told that this
 data should at least double each year.

 I noticed that there seems to be a limit on the number of disk-based
 volumes in a pool due to the suffix having 4 digits, i.e. 0001. This
 adds up to about 10,000 possible volumes per pool. So 10,000 volumes x
 100 GB is 1 PB. That seems like overkill. Perhaps setting Maximum
 Volume Bytes = 10G would be more reasonable since this would add up to
 100 TB.

 I'm also storing these file volumes on ZFS (v28 w/ dedup=on), and am
 wondering if smaller volumes will dedup better than larger ones. I'm
 curious to see what others are doing to take advantage of dedup-enabled
 ZFS storage w/ Bacula.

 Thanks,
 Mike




Re: [Bacula-users] Maximum Volume Bytes

2011-05-20 Thread Dennis Hoppe
Hello Mike,

On 20.05.2011 19:48, Mike Seda wrote:
 I'm currently setting up a disk-based storage pool in Bacula and am 
 wondering what I should set Maximum Volume Bytes to. I was thinking of 
 setting it to 100G, but am just wondering if this is sane.

I think you should use the parameter Volume Use Duration or Use
Volume Once. The parameter Maximum Volume Bytes only makes sense if
you are using a DVD as the medium.
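
A pool along those lines might look like this (a sketch with made-up
name and values, not taken from anyone's real config):

Pool {
   Name = DiskWeekly
   Pool Type = Backup
   Label Format = Weekly-
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 180 days
   # close the volume to new writes 7 days after its first use,
   # so each volume collects roughly one week of jobs
   Volume Use Duration = 7 days
   # alternative: allow exactly one write session per volume
   # Use Volume Once = yes
}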

 FYI, the total data of our clients is 15 TB, but we are told that this 
 data should at least double each year.
 
 I noticed that there seems to be a limit on the number of disk-based 
 volumes in a pool due to the suffix having 4 digits, i.e. 0001. This 
 adds up to about 10,000 possible volumes per pool. So 10,000 volumes x 
 100 GB is 1 PB. That seems like overkill. Perhaps setting Maximum 
 Volume Bytes = 10G would be more reasonable since this would add up to 
 100 TB.
 
 I'm also storing these file volumes on ZFS (v28 w/ dedup=on), and am 
 wondering if smaller volumes will dedup better than larger ones. I'm 
 curious to see what others are doing to take advantage of dedup-enabled 
 ZFS storage w/ Bacula.

Regards, Dennis





Re: [Bacula-users] Maximum Volume Bytes

2011-05-20 Thread John Drescher
 On 20.05.2011 19:48, Mike Seda wrote:
 I'm currently setting up a disk-based storage pool in Bacula and am
 wondering what I should set Maximum Volume Bytes to. I was thinking of
 setting it to 100G, but am just wondering if this is sane.


I think 100G will be fine. The size depends on your situation. You
want to have more than a few volumes (for better recycling efficiency)
but fewer than a few thousand. With 15 TB of data, 100 GB volumes work
out to roughly 150 volumes per full copy, comfortably inside that
range.


 I think you should use the parameter Volume Use Duration or Use
 Volume Once. The parameter Maximum Volume Bytes only makes sense if
 you are using a DVD as the medium.


I think that is just a different strategy; either will work. I use
Maximum Volume Bytes for disk volumes at home and at work. At home I
limit the size to 5GB and use the disk volumes with the Bacula virtual
changer, though I do not have many disk volumes. At work I use disk
volumes mainly for the catalogs and send the real 30+ TB of backups to
LTO tape.

John



Re: [Bacula-users] Maximum Volume Bytes

2011-05-20 Thread Phil Stracchino
On 05/20/11 13:48, Mike Seda wrote:
 Hi All,
 I'm currently setting up a disk-based storage pool in Bacula and am 
 wondering what I should set Maximum Volume Bytes to. I was thinking of 
 setting it to 100G, but am just wondering if this is sane.

It depends.

There are various ways to control usage of disk-based volumes, and
Maximum Volume Bytes is not necessarily the best one. If you do choose
to do it that way, there are a variety of factors that can influence
your choice of volume size. Smaller volumes waste less space if some
of the jobs in them are purged. Larger volumes make for faster backups
because there are fewer volume changes. And if you ever need to copy
volumes to another medium, you pretty much need the volumes to be no
larger than that medium.

I personally find it more useful, rather than fixing the size of
volumes, to limit their use duration or the number of jobs they can
hold. That way a volume is not an arbitrary chunk of storage but is
tied to a specific job or jobs, and it can be freed as soon as those
jobs are purged, instead of hanging onto space that is no longer in
use but cannot be recovered until the last job on the volume is
purged.
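
A sketch of that job-tied style (hypothetical names and values; the
key directive is Maximum Volume Jobs):

Pool {
   Name = PerJobFile
   Pool Type = Backup
   Label Format = Job-
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 30 days
   # tie each volume to exactly one job, so the volume can be
   # pruned and recycled as soon as that job's retention expires
   Maximum Volume Jobs = 1
}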


 I'm also storing these file volumes on ZFS (v28 w/ dedup=on), and am 
 wondering if smaller volumes will dedup better than larger ones. I'm 
 curious to see what others are doing to take advantage of dedup-enabled 
 ZFS storage w/ Bacula.

I have not experimented with ZFS deduplication.  ZFS deduplication is
implemented at block level, so the size of the file is unlikely to make
a great deal of difference.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Maximum Volume Bytes

2011-05-20 Thread Phil Stracchino
On 05/20/11 14:38, Mike Seda wrote:
 All,
 Never mind about dedup with Bacula. It seems that the current block
 format doesn't work too well with it:
 http://changelog.complete.org/archives/5547-research-on-deduplicating-disk-based-and-cloud-backups
 
 I'm getting decent compression rates though with LZJB (compression=on),
 which makes it compelling enough to stick with ZFS.
 
 My original question of recommended Maximum Volume Bytes size still 
 stands though. The documentation seems to recommend 50 GB, but we need 
 to back up 15 TB of data. I'm just wondering if that changes things or not.


Mike,
FYI, I set up my Bacula disk volumes such that ALL OF a single night's
backup jobs, regardless of level, and ONLY that single night's backup
jobs, go into a single Bacula volume.  (Except for my main server, which
is backed up directly to LTO2 tape.)  At this moment in time, the
smallest volume in spool/bacula (yes, it's a Solaris 10 box with a
12-disk ZFS array) is 5.8GB; the largest, 201GB.  I have seen disk
volumes as large as 450GB in the past.  I have not run into any problems
with this scheme or with this range of volume sizes.
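
Phil doesn't show his configuration, but one way to approximate the
one-volume-per-night scheme is a use duration just under a day, so
every job that starts that night appends to the same volume and the
volume closes before the next night begins (names and values below
are hypothetical):

Pool {
   Name = Nightly
   Pool Type = Backup
   Label Format = Nightly-
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 60 days
   # all jobs within 23 hours of the volume's first write share it;
   # after that it is marked Used and the next night gets a new one
   Volume Use Duration = 23 hours
}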


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.
