Re: [Bacula-users] multiple spool files per job

2011-10-08 Thread Ralf Gross
James Harper schrieb:
> Is there a way to make Bacula write multiple spool files per job? Two
> would do. What I'm seeing is that four jobs start, all hit their spool
> limit at around the same time, and then all wait in a queue until each
> file is despooled. Despooling is fairly quick (much quicker than
> spooling, which is limited by network and FD throughput), so it isn't
> a huge problem, but it would be better if the SD could just switch
> over to another spool file when despooling starts, so that the backup
> could continue uninterrupted.
> 
> I'm spooling to internal RAID, then despooling to an external USB disk.
> While spooling isn't really advised when the backup target is a disk,
> doing it this way means I can run multiple jobs at once without causing
> interleaving in the backup file (with a single SD volume) or severe
> filesystem fragmentation (with one SD volume per job). The internal
> RAID writes at ~100 MB/s while the USB disk writes at ~30 MB/s, so
> this turns out to be a pretty effective way to do what I want, except
> that despooling is causing a bottleneck.
> 
> Any suggestions?

No, this has been on the feature request list for a while now.
Spooling nearly doubles the elapsed time of my large backups.
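Until concurrent spool/despool lands, the queueing can at least be
spread out by capping each job's spool below the device-wide limit, so
jobs hit their caps (and despool) at different times instead of all at
once. A sketch using the standard SD spool directives; paths and sizes
are illustrative, not from this thread:

```conf
# bacula-sd.conf -- Device resource (illustrative values)
Device {
  Name = USB-Backup
  Media Type = File
  Archive Device = /mnt/usb             # despool target (the slow disk)
  Spool Directory = /var/spool/bacula   # on the fast internal RAID
  Maximum Spool Size = 80G              # total spool shared by all jobs
  Maximum Job Spool Size = 15G          # per-job cap; jobs fill their
                                        # spool and despool at different
                                        # moments instead of together
}
```

This doesn't remove the pause within a single job (that is exactly what
the feature request is about), but it makes it less likely that every
running job blocks on the same despool pass.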

http://thread.gmane.org/gmane.comp.sysutils.backup.bacula.devel/15351

Item 10: Concurrent spooling and despooling within a single job.
http://www.bacula.org/git/cgit.cgi/bacula/plain/bacula/projects?h=Branch-5.1

Ralf

--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security
threats, fraudulent activity, and more. Splunk takes this data and makes
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2dcopy2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] multiple spool files per job

2011-10-08 Thread James Harper
Is there a way to make Bacula write multiple spool files per job? Two
would do. What I'm seeing is that four jobs start, all hit their spool
limit at around the same time, and then all wait in a queue until each
file is despooled. Despooling is fairly quick (much quicker than
spooling, which is limited by network and FD throughput), so it isn't
a huge problem, but it would be better if the SD could just switch
over to another spool file when despooling starts, so that the backup
could continue uninterrupted.

I'm spooling to internal RAID, then despooling to an external USB disk.
While spooling isn't really advised when the backup target is a disk,
doing it this way means I can run multiple jobs at once without causing
interleaving in the backup file (with a single SD volume) or severe
filesystem fragmentation (with one SD volume per job). The internal
RAID writes at ~100 MB/s while the USB disk writes at ~30 MB/s, so
this turns out to be a pretty effective way to do what I want, except
that despooling is causing a bottleneck.
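
For reference, the Director side of a disk-to-disk setup like this is
usually just data spooling plus concurrency. A sketch with made-up
resource names, assuming the documented Spool Data and Maximum
Concurrent Jobs directives (Client, FileSet, and Schedule omitted):

```conf
# bacula-dir.conf (illustrative names)
Storage {
  Name = USB-File
  Address = sd.example.com
  Device = USB-Backup
  Media Type = File
  Maximum Concurrent Jobs = 4   # four jobs spool in parallel on the RAID
}

Job {
  Name = "fileserver-full"
  Type = Backup
  Storage = USB-File
  Spool Data = yes              # jobs despool one at a time, so the
                                # single File volume never ends up
                                # interleaved
}
```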

Any suggestions?

Thanks

James


