Re: Approach to build a cluster of Jenkins CI servers hands-off and tie them together?

2012-10-13 Thread vf
Regarding #7: slaves do not need the master's config files at all; they only
need the libraries used by the jobs.
Our master has no executors; all jobs are built on slaves. Master and slaves
have different configs. The slaves run as user jenkins-slave (the master as
jenkins), and only the tools/libs are synced; nothing else needs to be
preserved. So we do have a slave running on the master host, but that is
completely transparent to the master.
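
For reference, a minimal sketch of such a sync script (the slave names,
paths, and rsync options here are assumptions for illustration, not our
exact setup):

    #!/bin/sh
    # Push build tools/libs from the master to each slave.
    # Slave list and paths are hypothetical -- adjust to taste.
    for slave in slave1 slave2; do
        rsync -az --delete \
            --exclude 'workspace/' \
            /var/lib/jenkins/tools/ \
            jenkins-slave@"$slave":/var/lib/jenkins/tools/
    done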



zperry zack.pe...@sbcglobal.net schrieb:

Hi Les,

[...]

Regarding the list (shown below, and somewhat shortened and edited by me,
e.g. s#/var/jenkins#/var/lib/jenkins#), could you please comment on

   - #7.  Do you think it's a nearly optimal way to do this as part of
   setting up a slave?

[...]

   7. On master, I have a little shell script that uses rsync to
   synchronize master's /var/lib/jenkins to slaves (except
   /var/lib/jenkins/workspace). I use this to replicate tools on all
   slaves.

[...]

Regards,

-- Zack

-- 
Diese Nachricht wurde von meinem Android-Mobiltelefon mit K-9 Mail gesendet.

Re: Approach to build a cluster of Jenkins CI servers hands-off and tie them together?

2012-10-03 Thread Les Mikesell
On Tue, Oct 2, 2012 at 11:16 AM, zperry zack.pe...@sbcglobal.net wrote:

 I don't really see how that practice relates to a web service intended
 for both remote and local use from multiple users. The remote api does
the same things as the regular http interface.  It could work, of
 course, but it's not what people expect from a network service.   If
 you are going to only run commands locally from the jenkins master you
 might as well use the cli or groovy instead of the remote api.


 Let's say the master CI provides web access (UI and RESTful) only to the
 localhost of the node.  At first glance, this is extremely inconvenient.
 But in almost all cases the node runs sshd anyway, so it can be
 accessed via ssh.

 With ssh, you can use port forwarding (ssh -L
 local_port:remote_ip:remote_port remote_hostname) to forward a desired
 local port, e.g. 8080, to the master's localhost.

I understand the concept and use it for ad-hoc things myself, but
don't normally give all the people who would use jenkins a system
login to the server.

 No more insecure Basic Authentication to worry about.  If you have set up
 PKI, the access is transparent and secure.

Our lab is firewalled and remote access already secured, but if I were
concerned about this, I'd probably use https - unless there were
already a central authentication setup that would generate restricted
ssh logins on access.

-- 
   Les Mikesell
lesmikes...@gmail.com


Re: Approach to build a cluster of Jenkins CI servers hands-off and tie them together?

2012-10-03 Thread Les Mikesell
On Tue, Oct 2, 2012 at 12:25 PM, zperry zack.pe...@sbcglobal.net wrote:
 Hi Les,

 Finally, the first sentence in Distributed builds wiki and the list from
 Kohsuke Kawaguchi started making sense to me.  Thank you once more for
 sharing your experience.

 Regarding the list (shown below, and somewhat shortened and edited by me,
 e.g. s#/var/jenkins#/var/lib/jenkins#), could you please comment on

 #1 (the bold-faced part). Is it really necessary?  For example, the uid/gid
 usage in Fedora differs considerably from that of Ubuntu.  I doubt even NIS
 would make it simpler.  My approach is to review the /etc/passwd and
 /etc/group of all target platforms, pick a uid/gid pair that is unused on
 all of them, and assign the pair for Jenkins' use (on both the master and
 slaves).  The packages provided on the Jenkins pkg site (e.g.
 http://pkg.jenkins-ci.org/redhat/) don't make this task transparent, so the
 above step seems to be necessary.
 #7.  Do you think it's a nearly optimal way to do this as part of setting
 up a slave?
 #8. IMHO this is optional. What would you recommend?

 1.  Each computer has a user called jenkins and a group called jenkins. All
 computers use the same UID and GID. (If you have access to NIS, this can be
 done more easily.) This is not a Jenkins requirement, but it makes the slave
 management easier.

The main (maybe only) reason this would matter is if you have common
NFS mounts across the slaves where uid matters.  I have a read-only
export of common libs/tools shared via nfs and samba but it is all
public.   For things I want back from the build, I let jenkins archive
the build artifacts specified for the job.
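
As an illustration, a read-only export of that kind might look like this in
/etc/exports on the master (the path and subnet are hypothetical):

    # /etc/exports -- read-only tool share for the slaves
    /var/lib/jenkins/tools  192.168.0.0/24(ro,no_subtree_check)

followed by an exportfs -ra to activate it.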

 2. On each computer, the /var/lib/jenkins directory is set as the home
 directory of user jenkins. Again, this is not a hard requirement, but having
 the same directory layout makes things easier to maintain.

Doesn't really matter, especially if you don't install the jenkins
package and create the user yourself.  You do need to pay attention to
partition layout to have space for Jenkins' workspace.

 3 . All machines run sshd. Windows slaves run cygwin sshd.

Yes, for linux - and probably OK for windows.   My windows slaves are
VMs cloned after installing a bunch of tools, including setting up the
jenkins slave as a service.  In that scenario you just have to edit the
slave's own name in the jenkins-slave.xml file to match the
newly-added node in jenkins for it to come up working.

 4. All machines have the ntpdate client installed, and synchronize their
 clocks regularly with the same NTP server.

Yes, this is pretty much essential unless you are running VMs with a
good sync with their host clock.
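
For instance, a cron entry along these lines does the trick (the NTP server
and schedule are just examples; running ntpd instead is equally common):

    # /etc/cron.d/ntpdate -- hourly clock sync against a common server
    0 * * * *  root  /usr/sbin/ntpdate -u pool.ntp.org >/dev/null 2>&1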

 5. Master's /var/lib/jenkins has all the build tools beneath it.

I don't build anything on the master.   I do export the NFS/samba
share to the slaves from the jenkins server but that's just a matter
of convenience.

 6. Master's /var/lib/jenkins/.ssh has a private/public key pair and
 authorized_keys so that the master can execute programs on slaves through
 ssh, by using public key authentication.

Yes, but normally the only thing it needs to do directly is copy over
the slave.jar and execute it.  The rest is done internally with java
remoting magic.
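
Roughly, what the master does over that ssh channel amounts to the
following (a sketch, not the plugin's literal commands; the host name and
jar location are assumptions):

    # Manual equivalent of what the SSH launcher does
    scp /var/lib/jenkins/slave.jar jenkins@slave1:/var/lib/jenkins/
    ssh jenkins@slave1 java -jar /var/lib/jenkins/slave.jar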

 7. On master, I have a little shell script that uses rsync to synchronize
 master's /var/lib/jenkins to slaves (except /var/lib/jenkins/workspace). I
 use this to replicate tools on all slaves.

I avoid that with the common nfs/samba export.  Copying just gives a
different tradeoff for setup time/disk usage/speed.

 8. (optional) /var/lib/jenkins/bin/launch-slave is a shell script that Jenkins
 uses to execute jobs remotely.

Jenkins should take care of that by itself with the 'Jenkins SSH slaves plugin'.

 9. Finally all computers have other standard build tools like svn and cvs
 installed and available in PATH.

Yes, it is painful to try to fight with the rpm/deb package managers
in that respect.  In cases where you want to build with non-standard
versions of libs or tools, you can supply paths in the jenkins job,
but you might end up needing either a chroot environment or a whole
different slave to avoid conflicts.  For example, on windows it is
easy enough to have multiple versions of the boost libs available  and
specify which you want in your job, but I haven't come up with a good
way to do that on linux where there is an expected/packaged version
and changing it conflicts with other packages.
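
For example, supplying paths in the job can be as simple as a shell build
step like this (the toolchain locations are hypothetical):

    # Shell build step: prefer a side-installed toolchain over the packaged one
    export PATH=/opt/gcc-4.7/bin:$PATH
    export LD_LIBRARY_PATH=/opt/boost-1.51/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
    make clean all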

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Approach to build a cluster of Jenkins CI servers hands-off and tie them together?

2012-10-02 Thread Les Mikesell
On Mon, Oct 1, 2012 at 10:27 PM, zperry zack.pe...@sbcglobal.net wrote:

 Reviewed the above, and saw this: "When your Jenkins is secured, you can use
 HTTP BASIC authentication to authenticate remote API requests. See
 Authenticating scripted clients for more details."

 IMHO this is ill-advised.  With our POC setup, and many other
 nodes running HTTP services, we simply use ssh forwarding for access via
 the localhost of the machine we are on.  That is, most of our HTTP services
 are configured to serve their respective localhost only.  This should be
 common practice everywhere IMHO.

I don't really see how that practice relates to a web service intended
for both remote and local use from multiple users. The remote api does
the same things as the regular http interface.  It could work, of
course, but it's not what people expect from a network service.   If
you are going to only run commands locally from the jenkins master you
might as well use the cli or groovy instead of the remote api.

-- 
  Les Mikesell
 lesmikes...@gmail.com


Re: Approach to build a cluster of Jenkins CI servers hands-off and tie them together?

2012-10-02 Thread zperry
Hi Les,

[...]

 I don't really see how that practice relates to a web service intended 
 for both remote and local use from multiple users. The remote api does 
 the same things as the regular http interface.  It could work, of 
 course, but it's not what people expect from a network service.   If 
 you are going to only run commands locally from the jenkins master you 
 might as well use the cli or groovy instead of the remote api. 


Let's say the master CI provides web access (UI and RESTful) only to the 
localhost of the node.  At first glance, this is extremely inconvenient. 
But in almost all cases the node runs sshd anyway, so it can be 
accessed via ssh.

With ssh, you can use port forwarding (*ssh -L 
local_port:remote_ip:remote_port remote_hostname*) to forward a desired 
local port, e.g. 8080, to the master's localhost.

Then, on any machine that can connect to the master node via ssh, either 
single hop or via a jump host (aka multiple hops, possibly through a 
firewall from another network), you can just use a browser or issue 
requests to localhost:8080 of the node *you are physically using*.

No more insecure Basic Authentication to worry about.  If you have set up 
PKI, the access is transparent and secure.
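
Concretely (the hostnames here are examples), a single hop looks like:

    # Forward local port 8080 to the master's loopback interface
    ssh -L 8080:localhost:8080 jenkins-master.example.com
    # ...then browse or script against http://localhost:8080/ locally.

and through a jump host, something like (assuming an OpenSSH new enough
to support -W):

    ssh -o ProxyCommand='ssh -W %h:%p jumphost.example.com' \
        -L 8080:localhost:8080 jenkins-master.example.com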


 -- 
   Les Mikesell 
  lesmi...@gmail.com


Regards,

-- Zack 


Re: Approach to build a cluster of Jenkins CI servers hands-off and tie them together?

2012-10-02 Thread zperry
Hi Les,

Finally, the first sentence in the *Distributed builds* wiki 
(https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds) and the 
list from Kohsuke Kawaguchi 
(https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds#Distributedbuilds-Example%3AConfigurationonUnix) 
started making sense to me.  Thank you once more for sharing your 
experience.

Regarding the list (*shown below, and somewhat shortened and edited by me, 
e.g. s#/var/jenkins#/var/lib/jenkins#*), could you please comment on 

   - #1 (the bold-faced part). Is it really necessary?  For example, the 
   uid/gid usage in Fedora differs considerably from that of Ubuntu.  I doubt 
   even NIS would make it simpler.  My approach is to review the /etc/passwd 
   and /etc/group of all target platforms, pick a uid/gid pair that is unused 
   on all of them, and assign the pair for Jenkins' use (on both the master 
   and slaves).  The packages provided on the Jenkins pkg site (e.g. 
   http://pkg.jenkins-ci.org/redhat/) don't make this task transparent, so 
   the above step seems to be necessary.  (A sketch of this appears after 
   the list below.)
   - #7.  Do you think it's a nearly optimal way to do this as part of 
   setting up a slave?
   - #8. IMHO this is optional. What would you recommend?


   1. Each computer has a user called jenkins and a group called jenkins. *All 
   computers use the same UID and GID*. (If you have access to NIS, this 
   can be done more easily.) This is not a Jenkins requirement, but it 
   makes the slave management easier. 
   2. On each computer, the /var/lib/jenkins directory is set as the home 
   directory of user jenkins. Again, this is not a hard requirement, but 
   having the same directory layout makes things easier to maintain. 
   3. All machines run sshd. Windows slaves run cygwin sshd. 
   4. All machines have the ntpdate client installed, and synchronize their 
   clocks regularly with the same NTP server. 
   5. Master's /var/lib/jenkins has all the build tools beneath it. 
   6. Master's /var/lib/jenkins/.ssh has a private/public key pair and 
   authorized_keys so that the master can execute programs on slaves through 
   ssh, by using public key authentication. 
   7. *On master, I have a little shell script that uses rsync to 
   synchronize master's /var/lib/jenkins to slaves (except 
   /var/lib/jenkins/workspace). I use this to replicate tools on all slaves.* 
   8. (*optional*) /var/lib/jenkins/bin/launch-slave is a shell script that 
   Jenkins uses to execute jobs remotely. 
   9. Finally, all computers have other standard build tools like svn and 
   cvs installed and available in PATH.
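
As mentioned under #1 above, the manual uid/gid approach boils down to
something like this on each box (1050/1050 is just an example pair that
happened to be free on all our target platforms):

    # Create the jenkins account with a fixed, pre-checked uid/gid pair
    groupadd -g 1050 jenkins
    useradd -u 1050 -g 1050 -d /var/lib/jenkins -m -s /bin/bash jenkins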

Regards,

-- Zack


Re: Approach to build a cluster of Jenkins CI servers hands-off and tie them together?

2012-10-01 Thread Les Mikesell
On Sun, Sep 30, 2012 at 6:44 PM, zperry zack.pe...@sbcglobal.net wrote:

 Our environment is a strictly Linux environment (CentOS, Scientific Linux,
 Fedora, and Ubuntu).  Both of us are competent in setting up large clusters.
 We typically use tools like cobbler + a configuration management tool
 (Puppet, Chef, and the like) to set up a large number of machines (physical
 and/or virtual) hands-off (hundreds of nodes in less than an hour is typical).
 We would like to do the same for nodes running Jenkins, but the step-by-step
 guide doesn't give us any clues in this regard.

It is probably pretty rare to need more than one or a few masters, and
if you do it would be because they had to be configured differently.

 Personally I don't mind configuring the master via the Web UI (although I
 would love to automate this part if I have the necessary info). But being
 used to dealing with hundreds or more machines hands-off, clicking through
 the UI for many machines just doesn't feel right.  Can someone point us to
 documentation that talks about how to set up a large cluster of Jenkins CI
 hosts in a more hands-off way?

I've never needed enough slaves that it was a problem to click the
'copy existing node' button and click 'OK', but maybe there is a way
to do it through the rest interface.  For linux slaves started via ssh
you don't really need anything for jenkins other than the jvm and a
suitable user account.  The larger problem is making sure the slaves
have all of the (very arbitrary) build tools available that some
user's job might invoke.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: Approach to build a cluster of Jenkins CI servers hands-off and tie them together?

2012-10-01 Thread zperry
Les, 

Thanks for your comments.


 It is probably pretty rare to need more than one or a few masters, and 
 if you do it would be because they had to be configured differently. 


In the near term, we only need one master.  
 


 I've never needed enough slaves that it was a problem to click the 
 'copy existing node' button and click 'OK', *but maybe there is a way 
 to do it through the rest interface*. 


I have not found a reference in this regard, and would appreciate a 
pointer.  I will do the digging.
 

 For linux slaves started via ssh you don't really need anything for 
 jenkins other than the jvm and a 
 suitable user account.  


All these have been done.  Right now, for the POC, I have set up three KVM 
guests (all done via kickstart, hands-off):

   - Ubuntu 12.04 64bit
   - Ubuntu 11.04 64bit
   - Fedora 17 64bit

All have openjdk and the Jenkins packages installed, and all can ssh to each 
other as the 'jenkins' user.

The main use is to check our C++ builds, together with running tests 
written in bash and Python.  The master and the two slaves are to run the 
same upstream/downstream jobs. 
 

 The larger problem is making sure the slaves have all of the (very 
 arbitrary) build tools available that some 
 user's job might invoke. 


Not a problem for us.  All GNU tools and others (e.g. md5deep, tree) are 
installed during the server configuration phase.  Done already. 

All we need is just enough info so that we can tie these CI hosts 
together programmatically. Results from mouse clicks are not easily 
reproducible, but results from running scripts + templates are, and can 
even be idempotent. That's what we would like to accomplish.


 -- 
Les Mikesell 
   lesmi...@gmail.com


Regards,

-- Zack 


Re: Approach to build a cluster of Jenkins CI servers hands-off and tie them together?

2012-10-01 Thread Les Mikesell
On Mon, Oct 1, 2012 at 4:22 PM, zperry zack.pe...@sbcglobal.net wrote:

 I've never needed enough slaves that it was a problem to click the
 'copy existing node' button and click 'OK', but maybe there is a way
 to do it through the rest interface.


 I have not found a reference in this regard, and would appreciate a pointer.
 I will do the digging.

This would be a starting point - there is also a cli and a way to use
groovy to access the whole api.
https://wiki.jenkins-ci.org/display/JENKINS/Remote+access+API
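
As a rough illustration, scripting the 'new node' form through that
interface might look like the following (the endpoint is the one the UI
form posts to; the credentials, node name, and exact json payload here are
assumptions I have not verified):

    # Sketch: create a dumb slave via the remote API
    curl -u admin:apitoken \
         -d 'name=slave2' \
         -d 'type=hudson.slaves.DumbSlave$DescriptorImpl' \
         --data-urlencode 'json={"name": "slave2", "nodeDescription": "",
           "numExecutors": "2", "remoteFS": "/var/lib/jenkins",
           "labelString": "linux", "mode": "NORMAL",
           "launcher": {"stapler-class": "hudson.plugins.sshslaves.SSHLauncher",
                        "host": "slave2", "port": "22", "username": "jenkins"},
           "retentionStrategy": {"stapler-class":
                        "hudson.slaves.RetentionStrategy$Always"},
           "nodeProperties": {"stapler-class-bag": "true"}}' \
         http://localhost:8080/computer/doCreateItem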

 For linux slaves started via ssh you don't really need anything for
 jenkins other than the jvm and a
 suitable user account.


 All these have been done.  Right now, for the POC, I have set up three KVM
 guests (all done via kickstart, hands-off):

 Ubuntu 12.04 64bit
 Ubuntu 11.04 64bit
 Fedora 17 64bit

 All have openjdk and the Jenkins packages installed, and all can ssh to each
 other as the 'jenkins' user.

You shouldn't need the jenkins package installed on the slaves, and
they don't need to ssh to each other.  Just a user with ssh keys set
up so the master can execute commands and it will copy the slave jar
over itself.

-- 
  Les Mikesell
lesmikes...@gmail.com


Re: Approach to build a cluster of Jenkins CI servers hands-off and tie them together?

2012-10-01 Thread zperry
Hi Les,

Thanks for your follow-up.


 I have not found a reference in this regard, and would appreciate a
 pointer.  I will do the digging.

 This would be a starting point - there is also a cli and a way to use 
 groovy to access the whole api. 
 https://wiki.jenkins-ci.org/display/JENKINS/Remote+access+API 


I also did more digging and found this thread to be 
useful: 
http://serverfault.com/questions/309848/how-can-i-check-the-build-status-of-a-jenkins-build-from-the-command-line
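
(The gist of that thread is a query like the following; the job name is an
example, and the exact JSON fields are from memory:)

    # Ask the master for the last build's result (SUCCESS, FAILURE, ...)
    curl -s http://localhost:8080/job/my-cpp-build/lastBuild/api/json \
        | grep -o '"result":"[^"]*"'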
 

 [...]
  
 All have openjdk and the Jenkins packages installed, and all can ssh to each
 other as the 'jenkins' user.

 You shouldn't need the jenkins package installed on the slaves, and 
 they don't need to ssh to each other.  Just a user with ssh keys set 
 up so the master can execute commands and it will copy the slave jar 
 over itself. 


Due to the lack of clear documentation for a hands-off type of cluster setup, 
what I have done so far has been based mostly on the following:

   1. https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds#Distributedbuilds-OtherRequirements
   2. https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds#Distributedbuilds-Example%3AConfigurationonUnix
Up to now, I have been using the list in 2. as my checklist.  This item is why 
I have installed Jenkins on all POC CI nodes so far:

*On master, I have a little shell script that uses rsync to synchronize 
master's /var/jenkins to slaves (except /var/jenkins/workspace). I use this 
to replicate tools on all slaves.*

Thanks for letting me know that installing Jenkins on slaves is 
unnecessary.  Let me make my system more KISS :-)


 -- 
   Les Mikesell 
 lesmi...@gmail.com


-- Zack 


Re: Approach to build a cluster of Jenkins CI servers hands-off and tie them together?

2012-10-01 Thread zperry


 This would be a starting point - there is also a cli and a way to use 
 groovy to access the whole api. 
 https://wiki.jenkins-ci.org/display/JENKINS/Remote+access+API 


Reviewed the above, and saw this: *When your Jenkins is secured, you can 
use HTTP BASIC authentication to authenticate remote API requests. See 
Authenticating scripted clients 
(https://wiki.jenkins-ci.org/display/JENKINS/Authenticating+scripted+clients) 
for more details*.

IMHO this is ill-advised.  With our POC setup, and many other 
nodes running HTTP services, we simply use ssh forwarding for access via 
the localhost of the machine we are on.  That is, most of our HTTP services 
are configured to serve their respective localhost only.  This should be 
common practice everywhere IMHO.

-- Zack