I think the two of us are getting out of sync ;)

   - We use 4 machines: one runs the master plus 2 slaves, and each of the 
   other machines runs 3 slaves with up to 3 executors (a small sketch for 
   listing this topology follows the list).
   - The jobs' workspaces do not reach into each other. I have already been 
   through the sync issue (been there, didn't like it, went away).
   - *What I meant in the last post:* The problem is rather at the OS level, 
   as some of the programs and tools don't like each other. Installing both 
   categories may lead to crashes or other less obvious problems. That's why 
   the 2 categories of jobs have been separated physically, at the cost of 
   performance.
   - Our machines for building the final software run on SSDs, but that 
   won't work for GUIDE as the array would have to be big and therefore 
   expensive. And SSDs are unfortunately not as solid as expected: I wrecked 
   the first one within 6 weeks due to the massive disk activity caused by 
   the continuous integration :)


Looks like my next big project will be to review our toolbox to make things 
less sophisticated and more compatible. It's time to clean up the mess that 
grew over the years before I joined the business. 
For the time being I will go with your suggestion and set up single 
1-executor nodes with exclusive job assignments until I have finished my 
migrations, and then return to the issue when I have less workload. This is 
the easiest and quickest solution.
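
For what it's worth, once those dedicated 1-executor nodes exist, the 
exclusive jobs only need "Restrict where this project can be run" pointed at 
that node's label. Below is a minimal sketch for sanity-checking that setting 
over the REST API; the job names, label, URL and credentials are all made up 
for illustration:

    import xml.etree.ElementTree as ET
    import requests

    JENKINS_URL = "http://jenkins.example.com"      # hypothetical master URL
    AUTH = ("user", "api-token")                    # hypothetical credentials
    EXCLUSIVE_JOBS = ["db-job-a", "db-job-b"]       # hypothetical job names
    DEDICATED_LABEL = "db-exclusive"                # hypothetical node label

    for job in EXCLUSIVE_JOBS:
        cfg = requests.get(f"{JENKINS_URL}/job/{job}/config.xml", auth=AUTH)
        cfg.raise_for_status()
        root = ET.fromstring(cfg.content)
        assigned = root.findtext("assignedNode")    # label the job is tied to
        can_roam = root.findtext("canRoam")         # "false" when restricted
        ok = assigned == DEDICATED_LABEL and can_roam == "false"
        print(f"{job}: assignedNode={assigned!r}, canRoam={can_roam}, "
              f"{'OK' if ok else 'CHECK'}")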

Thanks :)
Jan



On Tuesday, July 3, 2012 12:16:30 PM UTC+2, sti wrote:
>
> A slave does not have to run on separate hardware. You can run one or more 
> slaves on the same server where the Jenkins master is running. The slaves 
> just need to have a unique name and working directory. 
>
> By the way, if you have jobs reaching into other jobs' workspaces, you 
> might run into difficulties when you try to parallelize the build pipeline. 
> Jenkins does not make any provisions for synchronizing access to a job 
> workspace, so if that's what you are doing, you have to take care of it 
> yourself. 
>
> The Jenkins way is to archive the build artifacts (which stores them in a 
> special area under JENKINS_HOME, outside of any job workspace) and then use 
> the Copy Artifact build step to copy the needed artifacts into the job that 
> needs them. Of course, if your build artifacts are in the gigabytes range, 
> copying them around will take some time. A fast SSD might help, or you 
> might choose to dip into the workspaces directly, but be aware of the 
> synchronization issue. 
>
> -- Sami 
>
> Jan Seidel wrote on July 3, 2012 at 11:37: 
>
> > Ah! Now I'm getting it :) 
> > Yes/no/maybe... The idea is not bad, but I have to investigate whether it 
> is feasible. 
> > 
> >         • The workspaces for these particular jobs are not small and 
> have to share hard disk space with other jobs. Currently one machine builds 
> all these special jobs, as it has several TB of disk space for this 
> purpose. This could require some hardware reconfiguration. No fun, but 
> possible. 
> >         • Worse, I would have to adapt all boxes to meet the 
> requirements for these jobs. The environment is quite special and tricky. 
> It is already a hassle to set it up freshly on that single machine, but 
> recreating it on production machines which are already set up needs some 
> special treatment to ensure that nothing is interfering. 
> >         • The machines provide 3 nodes per box with up to 3 executors 
> per node, so it would not be a problem to split off the exclusive jobs. 
> > I have to analyze it, but this is probably the smartest approach if no 
> convenient plugin is available. 
> > 
> > Cheers 
> > Jan 
> > 
> > On Thursday, June 28, 2012 12:41:04 PM UTC+2, Jan Seidel wrote: 
> > Hi folks, 
> > 
> > I am trying to parallelize some of our builds to speed things up. 
> > This particular build is quite special as it also interacts with 
> databases. Multiple simultaneous write accesses to a database will wreck 
> the content, so this must be avoided at all costs. In the worst case it 
> takes us out of business for 2 weeks and creates loads of work and stress. 
> > 
> > Don't ask me why the DB design is the way it is; it's pretty complicated, 
> insane, sooo wrong, and redesigning it is not worth discussing just to fix 
> this issue. I simply want the build jobs to do what they are meant to do. 
> > Been there, didn't like it, went away! 
> > 
> > The initial build job is a dispatcher that decides which job to run. So 
> far quite easy, but it also checks 2 conditions, which leads to two almost 
> identical jobs that differ only in the location of one of the repositories. 
> > So the jobs access mostly the same resources, including the database, 
> which must never happen simultaneously! They are a kind of "siblings". 
> > The default setting prevents spawning duplicate jobs, but the second 
> condition changes things a bit: it requires blocking a job if a duplicate 
> or the sibling is running. 
> > 
> > This brings me to a dumb situation. Either I find a way to: 
> >         • refer to related AND non-related jobs which may block a build 
> >         • block at the top level (the dispatcher). This works but scraps 
> all my efforts to get jobs that don't share the same resources to run in 
> parallel 
> >         • not block jobs at all and hope that the developers don't botch 
> it and wreck a database *yuk* 
> > Do you know a solution to block a job if a clone or a sibling is already 
> running? 
> > 
> > 
> > 
> > 
> > 
> > Cheers 
> > 
> > Jan 
> > 
>
>
