Sorry for the late reply.
I lost sight of this thread due to a serious workload going on here...
Mike got it right. The jobs will stay in the build queue and wait for a 
machine even if 15 executors on other machines are ready to go.
The approach Mike suggests is not really feasible. There are 200 jobs 
running on the cluster. It would be insane to set up slaves with one or a 
few executors each and restrict the jobs to them. I would never stop 
reconfiguring, as the setup changes in a very agile manner.
And Jenkins is overstressed with too many slaves on one machine. I am 
facing trouble with offline slaves when a machine runs 8-10 slaves. 
The result is that I have to restart them manually -.- So more slaves on 
one machine is not the solution. Rather fewer, with more executors, to make 
sure an executor is available at all times.
But that is IMHO a bit insane, and it is pretty upsetting to see one or two 
machines drowning in build jobs while the others might as well be computing 
for SETI, as they aren't utilized.
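For what it's worth, the stickiness seems to come from Jenkins' default 
load balancer, which (as far as I understand it) consistent-hashes each job 
onto the set of nodes, so a job keeps landing on the same machine as long as 
that machine is up. A rough Python sketch of the idea (the node names, the 
replica count, and the use of MD5 are my own illustration, not Jenkins' 
actual internals):

```python
import hashlib


def _hash(key: str) -> int:
    # Stable hash so the job->node mapping survives restarts
    # (MD5 chosen purely for illustration).
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


def build_ring(nodes, replicas=100):
    """Place many virtual points per node on a hash ring."""
    return sorted((_hash("%s#%d" % (n, i)), n)
                  for n in nodes for i in range(replicas))


def preferred_node(ring, job):
    """First node clockwise from the job's hash: the 'sticky' choice."""
    h = _hash(job)
    for point, node in ring:
        if point >= h:
            return node
    return ring[0][1]  # wrap around the ring
```

The point is that `preferred_node` is deterministic: the same job name always 
maps to the same node while the node set is unchanged, which is exactly the 
"build where it built before" behavior we are seeing.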

I am now trying to reconfigure the entire job topology to get as many jobs 
of a specific type as possible running on one machine, which also comes 
along with restrictions. Still no joy, but I may be able to make it a bit 
more comfortable.

Any ideas would still be appreciated, though.

Cheers
Jan

On Thursday, April 5, 2012 at 11:59:07 AM UTC+2, Jan Seidel wrote:
>
> Hi there, 
>
> my question is already stated in the title as you can see :) 
>
> I know that you can let jobs "roam" in a node cluster, but can you let 
> them REALLY ROAM? 
> Jenkins tries to build jobs on nodes which have already been used 
> for building that particular job. 
> That clutters some build queues while other nodes are picking their noses. 
>
> The idea was probably to preserve disk space, but I don't need that 
> behavior. "Unimportant jobs" delete their entire workspace when they 
> finish, while the important ones keep everything until the next run. These 
> important jobs have a separate hard disk with loads of space. 
>
> I have not only several executors running on each server but also up to 
> 3 instances of Jenkins slaves for better usage of system resources 
> and to box in very special jobs. Each slave instance is located on its 
> own hard disk. 
> That way the special jobs and the slaves have exclusive access to 
> resources, and the jobs may roam in their very own realm. 
> Sounds a bit weird but works perfectly, except for this '*%&"%§# 
> preference to build on the same node that has built the job before. 
>
> The excessive use of the hard disk slows all the builds in a senseless 
> way, as the bus reaches its capacity limit on spikes, which happen when 
> several jobs spawn at the same time and update their workspaces while 
> there are still loads of unused resources available on other machines 
> -.- 
>
> I see at the moment just one solution: split the cluster into more 
> slaves with fewer executors and reassign the jobs. 
> But that counteracts my idea a bit, as it trades performance 
> improvement, scalability and convenient usability for a modest 
> performance improvement and heavier administration. 
>
> Does anyone have an idea how to remove this preference of Jenkins and 
> simply let the jobs build where the most executors are available? 
>
> Cheers 
> Jan
