Re: [Bacula-users] ZFS and Bacula

2010-10-09 Thread Henrik Johansen
'Phil Stracchino' wrote: >On 10/09/10 07:31, Henrik Johansen wrote: >> $ dtrace -n 'sysinfo:::writech / execname == "bacula-sd" / { >> @dist[execname] = quantize(arg0); }' >> >> dtrace: description 'sysinfo:::writech ' matched 4 probes >> ^C >> >>bacula-sd >> value -
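For anyone who wants to repeat the measurement, the one-liner quoted above can be kept as a small D script and run as root on the SD host while a job is writing; the probe and aggregation are exactly the ones in the quote, only the layout differs:

    #!/usr/sbin/dtrace -s
    /* Distribution of write sizes issued by bacula-sd.
       For sysinfo:::writech, arg0 is the byte count of each write. */
    sysinfo:::writech
    /execname == "bacula-sd"/
    {
            @dist[execname] = quantize(arg0);
    }

The histogram printed on Ctrl-C shows which power-of-two bucket most writes land in, which is the number to line up against the ZFS recordsize.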

Re: [Bacula-users] ZFS and Bacula

2010-10-09 Thread Phil Stracchino
On 10/09/10 07:31, Henrik Johansen wrote: > $ dtrace -n 'sysinfo:::writech / execname == "bacula-sd" / { > @dist[execname] = quantize(arg0); }' > > dtrace: description 'sysinfo:::writech ' matched 4 probes > ^C > >bacula-sd > value  ------------- Distribution ------------- count
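A distribution that clusters just under 64 KB would be consistent with the storage daemon's default volume block size of 64512 bytes. That block size is a per-Device setting in bacula-sd.conf; the resource below is only a sketch with made-up names and paths, not a recommendation:

    # bacula-sd.conf (illustrative)
    Device {
      Name = FileStorage
      Media Type = File
      Archive Device = /tank/bacula/volumes   # hypothetical path
      Maximum Block Size = 131072             # default is 64512
      Random Access = yes
      LabelMedia = yes
      AutomaticMount = yes
    }

Whatever block size the SD actually writes is the natural value to match the recordsize against.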

Re: [Bacula-users] ZFS and Bacula

2010-10-09 Thread Henrik Johansen
'Phil Stracchino' wrote: >On 10/07/10 13:47, Roy Sigurd Karlsbakk wrote: >> Hi all >> >> I'm planning a Bacula setup with ZFS on the SDs (media being disk, >> not tape), and I just wonder - should I use a smaller recordsize (aka >> largest block size) than the default setting of 128kB? > >Actually,

Re: [Bacula-users] ZFS and Bacula

2010-10-07 Thread James Harper
> > I have been using this setup for awhile. You absolutely must disable Bacula > compression on the ZFS Devices within the SD or for the specific Pools that > have volumes on the ZFS. Doubling up encryption can actually increase file > sizes and also lead to data errors. > It can _not_ lead to
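For reference, Bacula's software compression is requested per FileSet in the Director configuration, so disabling it for ZFS-backed storage in practice means making sure the FileSets that feed those pools do not ask for it. A sketch with hypothetical names:

    # bacula-dir.conf (illustrative)
    FileSet {
      Name = "zfs-backed-fs"
      Include {
        Options {
          signature = MD5
          # no "compression = GZIP" line -- leave the compressing to ZFS on the SD
        }
        File = /export/home
      }
    }

and the dataset holding the volumes on the SD host carries the compression property instead:

    # zfs set compression=lzjb tank/bacula/volumes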

Re: [Bacula-users] ZFS and Bacula

2010-10-07 Thread Phil Stracchino
On 10/07/10 13:47, Roy Sigurd Karlsbakk wrote: > Hi all > > I'm planning a Bacula setup with ZFS on the SDs (media being disk, > not tape), and I just wonder - should I use a smaller recordsize (aka > largest block size) than the default setting of 128kB? Actually, there are arguments in favor of
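Recordsize is a per-dataset property and an upper bound on file block size rather than a fixed allocation unit, so it is worth checking what the volume dataset currently uses before changing anything (dataset name is hypothetical):

    $ zfs get recordsize tank/bacula/volumes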

Re: [Bacula-users] ZFS and Bacula

2010-10-07 Thread Henrik Johansen
'Roy Sigurd Karlsbakk' wrote: >Hi all > >I'm planning a Bacula setup with ZFS on the SDs (media being disk, not >tape), and I just wonder - should I use a smaller recordsize (aka >largest block size) than the default setting of 128kB? Setting the recordsize to 64k has worked well for us so far. I
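A sketch of that change with a made-up dataset name; note that recordsize only applies to data written after the property is set, so existing volume files keep the block size they were written with:

    # zfs set recordsize=64k tank/bacula/volumes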

Re: [Bacula-users] ZFS and Bacula

2010-10-07 Thread John Drescher
On Thu, Oct 7, 2010 at 2:33 PM, Roy Sigurd Karlsbakk wrote: >> If the data coming from bacula are already compressed by the bacula-fd >> there's little space for improvement. >> In your type of setup, I would disable compression on bacula-fd >> increasing the speed of backup, your sd doing it by z
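If the SD-side dataset is doing the compressing, its effect can be checked directly on the dataset rather than guessed at (dataset name is hypothetical):

    $ zfs get compression,compressratio tank/bacula/volumes

compressratio reports the ratio actually achieved on the data stored so far, which makes it easy to confirm that moving compression from bacula-fd to ZFS is paying off.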

Re: [Bacula-users] ZFS and Bacula

2010-10-07 Thread Mingus Dew
I have been using this setup for awhile. You absolutely must disable Bacula compression on the ZFS Devices within the SD or for the specific Pools that have volumes on the ZFS. Doubling up encryption can actually increase file sizes and also lead to data errors. -Shon On Thu, Oct 7, 2010 at 2:11

Re: [Bacula-users] ZFS and Bacula

2010-10-07 Thread Roy Sigurd Karlsbakk
> If the data coming from bacula are already compressed by the bacula-fd > there's little space for improvement. > In your type of setup, I would disable compression on bacula-fd > increasing the speed of backup, your sd doing it by zfs mechanism. thing is, I can't find anything about compression
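On where that setting lives: client-side compression is not configured in bacula-fd.conf at all; it is requested per FileSet in the Director's configuration, inside an Options block such as the illustrative one below. If no such line appears in any FileSet used for the client, the FD already sends its data uncompressed and there is nothing to turn off.

    Options {
      signature = MD5
      compression = GZIP   # this is the directive to remove or omit
    }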

Re: [Bacula-users] ZFS and Bacula

2010-10-07 Thread Bruno Friedmann
On 10/07/2010 07:47 PM, Roy Sigurd Karlsbakk wrote: > Hi all > > I'm planning a Bacula setup with ZFS on the SDs (media being disk, not tape), > and I just wonder - should I use a smaller recordsize (aka largest block > size) than the default setting of 128kB? > > Also, last I tried, with ZFS o

[Bacula-users] ZFS and Bacula

2010-10-07 Thread Roy Sigurd Karlsbakk
Hi all I'm planning a Bacula setup with ZFS on the SDs (media being disk, not tape), and I just wonder - should I use a smaller recordsize (aka largest block size) than the default setting of 128kB? Also, last I tried, with ZFS on a test box, I enabled compression, the lzjb algorithm (very lig
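For concreteness, the setup being described would look something like this on the SD host; pool and dataset names are made up, and the 64k recordsize echoes the value suggested elsewhere in the thread rather than a settled answer:

    # zfs create -o recordsize=64k -o compression=lzjb tank/bacula-volumes
    # zfs get recordsize,compression,compressratio tank/bacula-volumes

lzjb is the lightweight algorithm the post refers to, which is why it is the usual choice when the goal is compression with minimal CPU cost.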