The trick is to not run the Jenkins slave on the VM. If you run the Jenkins 
slave on the VM, then when the VM reverts, the TCP connection between master 
and slave dies, and Jenkins gets very unhappy.

What we do instead is, for each VM, run the slave on a "vm controller" 
machine. The controller then does something like:

start-vm.sh qavmNN BUILD_ID
ssh qavmNN script.sh

start-vm.sh does a bunch of stuff, but the main thing is reverting the VM 
to a known good state. We do the revert at the start of the job rather than 
at the end; that way, if something goes wrong we can ssh into the VM to 
diagnose it. start-vm.sh, for us, is actually a bit involved. A complete 
test run can take some time, so we chop it up into lots of smaller jobs. 
Some other Jenkins job has already built the executables for BUILD_ID and 
put them into an artifact repo. Copying from the artifact repo can take a 
while, so after we download the executables we create a "working" snapshot. 
So what start-vm.sh does is look for the working snapshot for BUILD_ID. If 
it is missing, start-vm.sh copies the executables over and creates the 
working snapshot. If it isn't missing, it just reverts to the working 
snapshot.
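
For what it's worth, here is a rough sketch of that logic. I don't know 
what hypervisor you'd be using, so this example uses VirtualBox's 
VBoxManage; the "clean" base snapshot name, the artifact repo path, and 
the guest path are all made up for illustration:

#!/bin/sh
# start-vm.sh VMNAME BUILD_ID
set -e
VM="$1"
BUILD_ID="$2"
SNAP="working-$BUILD_ID"

# Power the VM off if it is still running from a previous job.
VBoxManage controlvm "$VM" poweroff 2>/dev/null || true

if VBoxManage snapshot "$VM" list --machinereadable | grep -q "=\"$SNAP\""; then
    # Another job already staged this build: just revert and boot.
    VBoxManage snapshot "$VM" restore "$SNAP"
    VBoxManage startvm "$VM" --type headless
else
    # First job to see this BUILD_ID on this VM: revert to the base image,
    # copy the executables from the artifact repo (the slow part), then
    # snapshot so later jobs for the same build can skip the copy.
    VBoxManage snapshot "$VM" restore "clean"
    VBoxManage startvm "$VM" --type headless
    # (in practice you would wait here for ssh on the guest to come up;
    # this also assumes the VM name doubles as its hostname, like qavmNN)
    scp -r "artifact-repo:/builds/$BUILD_ID/." "$VM:/opt/tests/"
    VBoxManage snapshot "$VM" take "$SNAP"
fi

The only part that really matters is the shape: revert first, stage the 
build once per BUILD_ID, snapshot, and let every later job revert straight 
to that snapshot.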

Hope that helps.


On Monday, December 10, 2012 9:28:08 AM UTC-8, Andrew Melo wrote:
>
> Hey all, 
>
> I seem to remember mails about this, but I can't seem to gmail-foo the 
> right words to get the conversation back up. 
>
> Is there a good way to have a slave restart/load a checkpoint after a 
> job is executed? I'd like to run some tests that involve provisioning 
> a blank machine, so I'm looking for a way to get that connected up in 
> Jenkins. 
>
> Thanks, 
> Andrew 
>
> -- 
> -- 
> Andrew Melo 
>
