You could throttle your quicker jobs by inventing a "quickie" resource 
which is only available on N of your 6 agents. This means that 6-N agents 
will be reserved for the slower jobs.

E.g. suppose 8 jobs take 10 minutes each and 12 jobs take 2 minutes each. If 
the 2-minute jobs require the QUICK resource, and only 2 agents provide it, 
you would probably finish in about 20 minutes: the other 4 agents are 
reserved for the slow jobs (8 slow jobs on 4 agents means two rounds of 10 
minutes), while the 12 quick jobs finish on the 2 QUICK agents in 12 minutes.

It might also be a good idea to have a SLOW resource for the slow jobs, 
particularly if you have slow system-test pipelines which might otherwise 
land on all agents and make those tiny builds wait "forever".
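To illustrate, here is a minimal sketch of what that could look like in the GoCD config XML. The pipeline, job, agent, and resource names are all made up for this example; the idea is simply that a job listing a resource will only be scheduled on an agent that declares the same resource.

```xml
<!-- In the <pipelines> section: tag fast jobs with QUICK and
     slow jobs with SLOW. A job only runs on agents that declare
     a matching resource. -->
<pipeline name="my-pipeline">
  <materials>
    <git url="https://example.com/repo.git" />
  </materials>
  <stage name="build">
    <jobs>
      <job name="tiny-build">
        <resources>
          <resource>QUICK</resource>
        </resources>
        <tasks>
          <exec command="make" />
        </tasks>
      </job>
      <job name="system-tests">
        <resources>
          <resource>SLOW</resource>
        </resources>
        <tasks>
          <exec command="make">
            <arg>system-test</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>

<!-- In the <agents> section: e.g. 2 of the 6 agents declare QUICK,
     and the other 4 declare SLOW. (uuid/ipaddress values are
     placeholders.) -->
<agent hostname="agent1" ipaddress="10.0.0.1" uuid="...">
  <resources>
    <resource>QUICK</resource>
  </resources>
</agent>
```

You can also assign resources to agents from the agents page in the web UI instead of editing the XML by hand, which tends to be less error-prone.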

On Wednesday, August 9, 2017 at 17:00:02 UTC+2, Henrique Lemos Ribeiro wrote:
>
> I have 20 jobs and 6 agents which have the resources needed for them.
>
> They can run in parallel without a problem. Some of them take more time 
> than the others, which sometimes leads to a longer elapsed stage time 
> when a more time-consuming job starts later.
>
> So, if I could specify an order/priority for them, they could finish 
> earlier on average.
>
> Is there a way to do that using the Config XML?
>
> I did not find any options or documentation about it.
>
>
> On Tuesday, March 17, 2015 at 20:49:51 UTC-3, Michael Maley wrote:
>>
>> Thanks!
>>
>> On Tuesday, March 17, 2015 at 1:44:10 PM UTC-7, Aravind SV wrote:
>>>
>>> No, jobs are inherently parallel, which is why they don't have any 
>>> ordering. If you have two agents available, for instance, they will 
>>> both pick up a job each at the same time.
>>>
>>> It feels like you need multiple stages, the database one being the first 
>>> and the client one coming up next. That will allow you to model your 
>>> deployment properly, I would think.
>>>
>>> Cheers,
>>> Aravind
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"go-cd" group.