Hi Bruno,
On Friday, March 9, 2018 at 19:28:34 UTC+1, Bruno Friedmann wrote:
> I guess you should invest some time to read up on and set up spooling.
> I'm using that and I bundle easily 10 to 30 jobs on one media.
That would (as far as I understand it) mean writing multiple jobs to the same
Volume. As
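For reference, spooling as Bruno describes it is enabled with a couple of directives; a minimal, hedged sketch (all resource names and the spool path are made up):

```conf
# With data spooling, job data is first written to disk and then
# despooled to tape in one go, which is how several jobs end up
# bundled on one Volume.

# In the Director config (JobDefs or Job resource):
JobDefs {
  Name = "SpooledDefaults"
  Spool Data = yes              # spool job data before writing to tape
}

# In the Storage Daemon config (Device resource):
Device {
  Name = "LTO-Drive"
  Spool Directory = /var/lib/bareos/spool
  Maximum Spool Size = 100 G    # cap disk usage for spool files
}
```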
On Tuesday, March 6, 2018 10:48:04 CET Martin Emrich wrote:
> On Tuesday, March 6, 2018 at 01:23:45 UTC+1, Stefan Klatt wrote:
> > I have a few comments.
> >
> > - Do you really have monolithic config files? They are hard to read
> > and old school :-).
>
> In reality the config files
Hi!
On Thursday, March 8, 2018 at 01:24:33 UTC+1, Stefan Klatt wrote:
> I found only one small point. Updates here regenerate the default
> files like "BackUpCatalog.conf" or "bareos-dir.conf" every time. I
> ignore the unneeded files and truncate them to 1 byte ("#").
We store the config in
Hallo Martin,
> I have a success, see below :)
That's fantastic!
>> Since 16.2 you don't need the @ operator any more if you use the new
>> directory structure.
> Yes, the config tree dates back to the good ol' Bacula 5 days... I did not get
> around to "migrating" to the new bareos-dir.d
Hi!
I have a success, see below :)
On Tuesday, March 6, 2018 at 12:05:09 UTC+1, Stefan Klatt wrote:
> Since 16.2 you don't need the @ operator any more if you use the new
> directory structure.
Yes, the config tree dates back to the good ol' Bacula 5 days... I did not get
around to "migrating" to the new bareos-dir.d
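The new directory structure mentioned above (available since 16.2) splits the Director config into one file per resource; an assumed example layout (resource and file names are illustrative only):

```text
/etc/bareos/bareos-dir.d/
├── director/bareos-dir.conf
├── catalog/MyCatalog.conf
├── jobdefs/DefaultJob.conf
├── job/BackupClient1.conf
├── pool/Full.conf
└── storage/File.conf
```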
Hello Martin,
>> I have a few comments.
>>
>> - Do you really have monolithic config files? They are hard to read
>> and old school :-).
> In reality the config files are split via the @ operator (one file for pools,
> one for storage, one for job templates and schedules, and one for
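A split via the classic @ operator, as Martin describes, might look like this (paths and file names are hypothetical; each @ line inlines the named file at that point of bareos-dir.conf):

```conf
@/etc/bareos/conf.d/pools.conf
@/etc/bareos/conf.d/storages.conf
@/etc/bareos/conf.d/jobdefs-and-schedules.conf
@/etc/bareos/conf.d/clients.conf
```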
Hi Martin,
>> I think we need to review your configuration
> Apparently :) I have attached a stripped-down, sanitized example.
Theoretically, it should work.
I have a few comments.
- Do you really have monolithic config files? They are hard to read and
old school :-).
- I didn't know that the
Hi!
On Sunday, March 4, 2018 at 02:15:54 UTC+1, Stefan Klatt wrote:
> I think we need to review your configuration
Apparently :) I have attached a stripped-down, sanitized example.
Thanks!
Ciao
Martin
Hello Martin,
>> I might have a clue, I had a "Pool" and "Storage" statement in the top-level
>> jobDefs (although I use "Full Backup Pool", "Differential Backup Pool" etc.
>> to set the final target pool.
>> Maybe that throws the scheduler off, I removed Pool and Storage from the
>> top-level
On Wednesday, February 28, 2018 at 15:19:33 UTC+1, Martin Emrich wrote:
> I might have a clue, I had a "Pool" and "Storage" statement in the top-level
> jobDefs (although I use "Full Backup Pool", "Differential Backup Pool" etc.
> to set the final target pool.
> Maybe that throws the scheduler off,
On Tuesday, February 27, 2018 at 21:16:12 UTC+1, Stefan Klatt wrote:
>
> Do you mean with director the job definitions?
>
> Do you use different job definitions? If not do you use "Allow
> Duplicate Jobs"?
>
I have one master "jobDefs". Then I have three for the three Pool sets,
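The "Allow Duplicate Jobs" directive Stefan asks about lives in a Job or JobDefs resource; a hedged sketch (the name is a placeholder):

```conf
# With "no", a second start of an already-running Job is not allowed
# to run in parallel with the first; the exact behavior also depends
# on the related duplicate-cancellation settings.
JobDefs {
  Name = "DefaultJob"
  Allow Duplicate Jobs = no
}
```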
Hello Martin
>> Probably a problem with the job priority?
> Hmm, can you elaborate? I have "Allow Mixed Priority" already set to yes
> (tried the default "no" before, too).
> Otherwise, all Jobs have the same priority 10.
Do you mean with director the job definitions?
Do you use different
Hi Stefan!
On Monday, February 26, 2018 at 19:20:21 UTC+1, Stefan Klatt wrote:
>
> Probably a problem with the job priority?
Hmm, can you elaborate? I have "Allow Mixed Priority" already set to yes (tried
the default "no" before, too).
Otherwise, all Jobs have the same priority 10.
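The priority-related directives discussed here sit on the Job resource; a hedged sketch (the job name is a placeholder). Lower Priority numbers run first, and "Allow Mixed Priority" lets jobs of different priorities run concurrently:

```conf
Job {
  Name = "backup-client1"
  Priority = 10
  Allow Mixed Priority = yes
}
```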
Hi Martin,
On 26.02.2018 at 15:43, Martin Emrich wrote:
> Hi!
>
> I am trying to get parallel jobs working, but it just does not work:
>
> * I have separate pools, on separate devices
> * I have "maximum concurrent jobs" set on the director (dir and storages)
> and on the storage (storage
Hi!
I am trying to get parallel jobs working, but it just does not work:
* I have separate pools, on separate devices
* I have "maximum concurrent jobs" set on the director (dir and storages)
and on the storage (storage and devices)
But when I start three jobs (each configured for a
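The concurrency settings Martin lists span several resources in both daemons; a hedged sketch of where "Maximum Concurrent Jobs" has to appear (all names are placeholders, and the effective limit is the smallest value along the chain):

```conf
# bareos-dir.conf
Director {
  Name = bareos-dir
  Maximum Concurrent Jobs = 10
}
Storage {                        # the Director's reference to the SD
  Name = File
  Maximum Concurrent Jobs = 10
}

# bareos-sd.conf
Storage {
  Name = bareos-sd
  Maximum Concurrent Jobs = 10
}
Device {
  Name = FileStorage
  Maximum Concurrent Jobs = 10
}
```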