Andreas Bogacki wrote:
> Hi,
>
> I seem to have found a problem with multiple Run statements in a Job
> resource.
> My setup is a bit unusual: because one location has too little storage,
> I set up several migrate jobs.
>
> scheduled backup1 Job writes to filestorage1
> migrate1 Job moves from filestorage1 to filestorage2 based on volumetime
> migrate2 Job moves from filestorage2 to filestorage3 based on volumetime
> migrate3 Job moves from filestorage3 to filestorage4 based on volumetime
>
> The preferred execution order would be migrate3, migrate2, migrate1, backup1.
> Using priorities to get that order is not practical because there are lots
> of other backup jobs. (Last time I tried, I had a weekend full of jobs
> waiting for one mount request.)
>
> Run is not recursive, so I tried to use multiple Run statements in the
> backup1 Job.
> This led to the director spawning a massive number of those migrate
> jobs (not 3 as expected, but 100+). I suppose this behaviour is somewhat
> wrong.
>
> Is there any way to define dependencies between jobs that go deeper than
> just "start this single job before this one is run", or will I have to
> create staggered schedules with all jobs having the same priority?
>
> Thanks for your help,
> Andreas Bogacki

Forgot to provide some details:

debian lenny
bacula 2.4.4 (28 December 2008) x86_64-pc-linux-gnu debian lenny/sid
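For reference, the configuration described above would look roughly like the sketch below. This is only an illustration of the reported setup, not a tested config: the pool names, the Selection Type, and the omitted client/fileset directives are assumptions, while the job names and the multiple Run statements come from the description.

```
# bacula-dir.conf -- sketch of the setup described above (untested)
Job {
  Name = "backup1"
  Type = Backup
  # ... Client, FileSet, Schedule, Storage pointing at filestorage1 ...
  # Multiple Run statements, intended to queue the migrate chain:
  Run = "migrate3"
  Run = "migrate2"
  Run = "migrate1"
}

Job {
  Name = "migrate1"
  Type = Migrate
  Pool = filestorage1-pool        # hypothetical pool name
  Selection Type = PoolTime       # assumed mapping of "based on volumetime"
  # Next Pool (pointing at filestorage2) would be set in the Pool resource
}
# migrate2 and migrate3 analogous, moving filestorage2->3 and 3->4
```

With this in place, each start of backup1 should queue exactly one instance of each migrate job; the report is that the director instead spawns 100+ of them.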
The spawning of those jobs causes a "Too many open files" error at some point:

message.c:589 fopen /var/log/bacula/log failed: ERR=Too many open files

and then the director segfaults.