Re: Is it possible to let the jobs roam in a node cluster?

2013-01-23 Thread Stephen Connolly
Not my call.


On 23 January 2013 12:13, liam.j.bennett  wrote:

> Any chance of open sourcing it?


Re: Is it possible to let the jobs roam in a node cluster?

2013-01-23 Thread liam.j.bennett
Any chance of open sourcing it?

On Tuesday, January 22, 2013 10:57:52 PM UTC, Stephen Connolly wrote:
>
> My employers have an enterprise plugin that implements an even load 
> strategy, whereby unused slaves are preferred to slaves where the project 
> previously built. 
>
>

Re: Is it possible to let the jobs roam in a node cluster?

2013-01-22 Thread Stephen Connolly
My employers have an enterprise plugin that implements an even load
strategy, whereby unused slaves are preferred to slaves where the project
previously built.
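The strategy described above can be sketched roughly as follows (a minimal illustration only, not the actual closed-source plugin; all names and data structures here are made up):

```python
# Rough sketch of an "even load" strategy: among nodes with a free
# executor, prefer ones that have NOT built this project before, and
# break ties by the number of free executors. Illustrative only.

def pick_node(nodes, project):
    # nodes: list of dicts with "name", "free_executors", "built_projects"
    candidates = [n for n in nodes if n["free_executors"] > 0]
    if not candidates:
        return None  # nothing free: the job waits in the queue
    # False sorts before True, so unused nodes come first.
    candidates.sort(key=lambda n: (project in n["built_projects"],
                                   -n["free_executors"]))
    return candidates[0]["name"]

nodes = [
    {"name": "slaveA", "free_executors": 2, "built_projects": {"job1"}},
    {"name": "slaveB", "free_executors": 1, "built_projects": set()},
]
print(pick_node(nodes, "job1"))  # slaveB: the unused slave wins
```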



Re: Is it possible to let the jobs roam in a node cluster?

2013-01-22 Thread liam.j.bennett
I think this subject has come up several times. I know I have suffered with 
it for a long time myself. I can see situations where each method would be 
suitable, so we should be able to configure it both globally and at the 
label level. For example, nodes with the label "labelA" would have roaming 
mode set to "build on last node" and nodes with label "labelB" would have 
roaming mode set to "build on any available". I would suggest this as a 
plugin, but it is likely to also require core changes.
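The proposed per-label option could look something like this (a sketch only; no such "roaming mode" setting exists in Jenkins, and every name below is invented for illustration):

```python
# Hypothetical per-label "roaming mode", as proposed above.
ROAMING_MODE = {
    "labelA": "build_on_last_node",
    "labelB": "build_on_any_available",
}

def schedule(job, free_executors):
    # job: {"label": ..., "last_node": ...}; free_executors: {node: count}
    mode = ROAMING_MODE.get(job["label"], "build_on_last_node")
    if mode == "build_on_last_node":
        # classic behavior: wait for the node that built the job before
        if free_executors.get(job["last_node"], 0) > 0:
            return job["last_node"]
        return None  # stays in the queue
    # "build_on_any_available": take the first free executor anywhere
    for node, free in free_executors.items():
        if free > 0:
            return node
    return None

free = {"slave1": 0, "slave2": 3}
print(schedule({"label": "labelA", "last_node": "slave1"}, free))  # None
print(schedule({"label": "labelB", "last_node": "slave1"}, free))  # slave2
```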


Re: Is it possible to let the jobs roam in a node cluster?

2013-01-21 Thread Jan Seidel
Sorry for the late reply.
I lost sight of this thread due to the serious workload going on here...
Mike got it right. The jobs will stay in the build queue and wait for a 
machine even if 15 executors on other machines are ready to go.
The approach Mike suggests is not really feasible. There are 200 jobs 
running on the cluster. It would be insane to set up slaves with one or a 
few executors and restrict the jobs; I would never stop reconfiguring, as 
the setup changes in a very agile manner.
And Jenkins is overstressed with too many slaves on one machine. I am 
facing trouble with offline slaves if a machine has 8-10 slaves running.
The result is that I have to start them manually -.- So more slaves on one 
machine is not the solution; rather fewer, with more executors, to make sure 
an executor is available at all times.
But that is IMHO a bit insane, and it is pretty upsetting to see one or two 
machines drowning in build jobs while the others could just as well be 
computing for SETI, as they aren't utilized.

I am now trying to reconfigure the entire job topology to get as many jobs 
of a specific type as possible running on one machine, which also comes 
along with restrictions. Still no joy, but I may be able to make it a bit 
more comfy.

Any ideas would still be appreciated though.

Cheers
Jan




Re: Is it possible to let the jobs roam in a node cluster?

2012-04-07 Thread Sami Tikka
2012/4/6 Les Mikesell :
> No, he is saying that after jobs have been run, subsequent runs will
> queue waiting for the node that did the previous build instead of
> migrating to an available node even if it is a long wait.   I think if
> that node is down or disconnected, the job would go to a different one
> immediately, but if it is just busy the job stays in the queue for it
> no matter how deep the queue gets.   In the case of a big source
> checkout with a small update, it might be worth the wait but I don't
> know if there is any way to control the behavior when that's not the
> case.

Odd, I have never seen that behavior. Even on Jenkins instances where
I have multiple executors per slave, I have never seen builds hang in
the queue when there is a free executor on a slave they can run on.

According to my understanding, Jenkins will prefer to run a build on a
slave that already has the sources for the build checked out, but if
no executor on such a slave is free, it should not keep the build in
the queue if it can immediately use another slave.
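The expected default, in other words, is roughly this (a simplified sketch; the real scheduler lives in Jenkins core and is considerably more involved):

```python
# Prefer the node that already holds the job's workspace, but fall back
# immediately to any other node with a free executor instead of waiting.

def assign(job, free_executors):
    # job: {"name": ..., "workspace_on": node name or None}
    preferred = job.get("workspace_on")
    if preferred and free_executors.get(preferred, 0) > 0:
        return preferred              # reuse the existing checkout
    for node, free in free_executors.items():
        if free > 0:
            return node               # roam to any free slave
    return None                       # only now does the job queue

free = {"slaveX": 0, "slaveY": 1}
print(assign({"name": "job1", "workspace_on": "slaveX"}, free))  # slaveY
```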

-- Sami


Re: Is it possible to let the jobs roam in a node cluster?

2012-04-06 Thread Les Mikesell
On Fri, Apr 6, 2012 at 2:41 PM, Sami Tikka  wrote:
> So what you are saying is that hard disk is becoming the bottleneck
> for you if there are multiple builds running at the same time even if
> you have free slaves picking their nose?
>
> It sounds to me like you can solve your problem by setting number of
> executors in each slave to 1.
>
> At work I have 20 slaves each with 1 executor and the builds REALLY ROAM :)

No, he is saying that after jobs have been run, subsequent runs will
queue waiting for the node that did the previous build instead of
migrating to an available node, even if it is a long wait. I think if
that node is down or disconnected, the job would go to a different one
immediately, but if it is just busy, the job stays in the queue for it
no matter how deep the queue gets. In the case of a big source
checkout with a small update, it might be worth the wait, but I don't
know if there is any way to control the behavior when that's not the
case.

-- 
  Les Mikesell
lesmikes...@gmail.com


Re: Is it possible to let the jobs roam in a node cluster?

2012-04-06 Thread Sami Tikka
So what you are saying is that the hard disk is becoming the bottleneck
for you when there are multiple builds running at the same time, even if
you have free slaves picking their noses?

It sounds to me like you can solve your problem by setting the number of
executors on each slave to 1.

At work I have 20 slaves, each with 1 executor, and the builds REALLY ROAM :)

-- Sami



Re: Is it possible to let the jobs roam in a node cluster?

2012-04-05 Thread Les Mikesell
On Thu, Apr 5, 2012 at 4:59 AM, Jan Seidel  wrote:
> Hi there,
>
> my question is already stated in the title as you can see :)
>
> I know that you can let jobs "roam" in a node cluster but can you let
> it REALLY ROAM?
> Jenkins tries to let jobs build on nodes which already have been used
> for building that particular job.
> That clutters some build queues while other nodes are picking their noses.
>
> The idea was probably to preserve disk space. But I don't need that
> behavior.

Not so much that as to let SCM updates work so you don't need to wait
for a full checkout of large jobs for the next build which will
typically have a small change.

> Does someone have an idea how to remove this preference of Jenkins and
> simply let the jobs build where the most executors are available?

Normally the initial distribution would be more random in the first
place; perhaps you started with one node and built all the jobs
before adding the others. All of our jobs are tied to labels, so if we
remove the labels from a node, all of its jobs will do their next
build on one of the others. You could force jobs to move with a
'restrict to' setting pointing at a different node, or force some
separation by using several labels just to make them spread out, but I
don't know of a more dynamic approach. I think what you need is a way
to specify in the job that Jenkins should clean up and forget where it
ran last, for the jobs where you want that behavior.

-- 
   Les Mikesell
lesmikes...@gmail.com


Is it possible to let the jobs roam in a node cluster?

2012-04-05 Thread Jan Seidel
Hi there,

my question is already stated in the title as you can see :)

I know that you can let jobs "roam" in a node cluster, but can you let
them REALLY ROAM?
Jenkins tries to let jobs build on nodes which have already been used
for building that particular job.
That clutters some build queues while other nodes are picking their noses.

The idea was probably to preserve disk space. But I don't need that
behavior. "Unimportant jobs" delete their entire workspace upon
finishing, while the important ones store everything until the next run.
These important jobs have a separate hard disk with loads of space.

I have not only several executors running on each server but also up to
3 instances of Jenkins slaves, for better usage of system resources
and to box in very special jobs. Each slave instance is located on its
own hard disk.
That way the special jobs and the slaves have exclusive access to
resources, and the jobs may roam in their very own realm.
Sounds a bit weird but works perfectly, except for this '*%&"%§#
preference to build on the same node that has built the job before.

The excessive use of the hard disk slows all the builds in a senseless
way, as the bus reaches its capacity limit on spikes, which happen when
several jobs spawn at the same time and update their workspaces while
there are still loads of unused resources available on other machines
-.-

I see at the moment just one solution: split the cluster into more
slaves with fewer executors and reassign the jobs.
But that counteracts my idea a bit, as it turns performance
improvement, scalability and convenient usability into merely a further
performance improvement at the cost of more administration.

Does someone have an idea how to remove this preference of Jenkins and
simply let the jobs build where the most executors are available?

Cheers
Jan