I'm trying to work out how to best set up GoCD to build a series of 
libraries.

At present I use Jenkins: a single pipeline receives a triggering git 
webhook (from any one of ~50 repositories) and then uses a single 
repository of builder-specific code (essentially what GoCD's Configuration 
Repositories provide, as I understand from this group and the docs) to 
clone and build the triggering repo's code.

The code is built for 6 different target/release/debug combinations (3 
targets, each with debug+release). On Jenkins, my current system carries 
out these builds in series, making the total time roughly 6x longer than it 
might otherwise be (excluding e.g. setup and post-build publication).

The bulk of my question relates to Stage 2 in my explanation below.

My current thought for GoCD looks something like:

   1. Stage 1 (setup, static testing)
      1. Job 1 (only job)
         1. Checkout the git material that triggers the build
         2. Parse a file giving dependencies
         3. Install any necessary dependencies using one of the 6 targets
            (probably 32-bit Windows, debug)
         4. Run static analysis (this can take some non-negligible time,
            due to the tools :/ )
         5. Run tests on the source code
   2. *Stage 2 (build the 6 target combinations in parallel)*
      1. Jobs 1-6 (perhaps using
         https://docs.gocd.org/current/advanced_usage/admin_spawn_multiple_jobs.html ?).
         Here I want each job to use a different Docker container, drawn
         from one of 3 images (one per target; release and debug can share
         an image). My questions follow the job config explanation below.
         1. Checkout git code (or copy from host?)
         2. Install target-specific dependencies
         3. Compile the library
         4. Run tests on the compiled code
         5. Build a package for installation
         6. Publish back to host via artifact?
   3. Stage 3 (post-build)
      1. Publish to GitHub releases
      2. Publish to the feed from which packages are installed in 1.1.3
         and 2.{1-6}.2
   
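Roughly, I imagine the config-repo definition (GoCD YAML plugin format) 
would look like the sketch below - the repo URL, script names, and resource 
tags are placeholders, and only 2 of the 6 build jobs are shown:

```yaml
# Hedged sketch in gocd-yaml-config-plugin format; all names are
# placeholders, not my real setup.
format_version: 10
pipelines:
  mylib:
    group: libraries
    materials:
      source:
        git: https://github.com/example/mylib.git   # triggering repo
    stages:
      - setup:
          jobs:
            static-checks:
              tasks:
                - exec:
                    command: ./ci/setup_and_analyse.sh
      - build:
          jobs:
            # One job per target/mode combination (2 of 6 shown).
            # Alternatively, a single job with run_instance_count: 6
            # (the "spawn multiple jobs" doc linked above).
            win32-debug:
              resources: [docker-win32]   # route to an agent with the right image
              tasks:
                - exec:
                    command: ./ci/build.sh
                    arguments: [win32, debug]
              artifacts:
                - build:
                    source: dist/
            win32-release:
              resources: [docker-win32]
              tasks:
                - exec:
                    command: ./ci/build.sh
                    arguments: [win32, release]
      - publish:
          jobs:
            release:
              tasks:
                - exec:
                    command: ./ci/publish.sh
```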
To run the jobs in parallel, it seems I need multiple agents - but it also 
seems I can install multiple agents on a single host 
(https://docs.gocd.org/current/advanced_usage/admin_install_multiple_agents.html).
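Concretely, I'm picturing something like this docker-compose sketch, one 
service per agent built from my 3 builder images (image names, key, and 
URL are placeholders; the GO_SERVER_URL / AGENT_AUTO_REGISTER_* variables 
are what the official GoCD agent images read, as far as I can tell):

```yaml
# Hypothetical sketch: 6 static agents on one host from 3 builder images.
# Debug and release for a target share an image, so each image appears
# twice. Only the two win32 agents are shown.
services:
  agent-win32-1:
    image: my-registry/builder-win32-agent:latest   # placeholder image
    environment: &agent-env
      GO_SERVER_URL: https://gocd-server:8154/go    # placeholder URL
      AGENT_AUTO_REGISTER_KEY: some-shared-key      # placeholder key
      AGENT_AUTO_REGISTER_RESOURCES: docker-win32
  agent-win32-2:
    image: my-registry/builder-win32-agent:latest
    environment: *agent-env
  # ...and two services each for the other two target images
```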
 
Alternatively, I looked at elastic agents, but this seems like it might be 
more complex than required? If I have an image that I pass various 
configuration variables into, and it produces the compiled library after 
testing, what's the best way to integrate this into GoCD?
There will only be the 6 agents (maybe 7 in future?) required for builds, 
and I don't want to scale beyond a single host machine. I think running 
them all together is fine (the builds won't use multiple cores effectively, 
and I can devote a machine specifically to this build process).

Do I need a separate agent for the Stage 1/3 parts? On Jenkins I run all of 
the pipeline on the master node, which isn't the 'best-practice' method but 
was easier to set up.

Should I instead use only a single agent and manually write script code to 
start 6 containers in parallel and then gather their results? (I'd prefer 
not to; this seems like it would forgo some of the nice visualization and 
scheduling that GoCD provides.)
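For concreteness, the fan-out/gather I have in mind would be roughly the 
sketch below - run_build, the target names, and the docker invocation in 
the comment are all placeholders for the real builder images and scripts:

```shell
# Sketch of the "single agent, script the parallelism myself" option.
# run_build stands in for the real container invocation, e.g.
#   docker run --rm -v "$PWD:/src" "builder-$1" /src/build.sh "$2"
# (image and script names are hypothetical).
run_build() {
    echo "built $1 $2"
}

tmpdir=$(mktemp -d)
pids=""
for target in win32 win64 linux64; do        # placeholder target names
    for mode in debug release; do
        run_build "$target" "$mode" > "$tmpdir/$target-$mode.log" &
        pids="$pids $!"
    done
done

# Gather: the stage fails if any of the 6 builds failed.
status=0
for pid in $pids; do
    wait "$pid" || status=1
done

builds=0
for log in "$tmpdir"/*.log; do
    builds=$((builds + 1))
done
echo "completed $builds builds, status=$status"   # prints: completed 6 builds, status=0
```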

To run the Docker containers, do I (based on the above description) need to 
use the Docker-outside-of-Docker workflow described 
here: https://docs.gocd.org/current/gocd_on_kubernetes/docker_workflows.html 
?
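As I understand it, that workflow amounts to mounting the host's Docker 
socket into the agent container, so any `docker run` issued by the agent 
starts sibling containers on the host rather than nested ones - something 
like (image name and tag assumed):

```yaml
# DooD sketch: the agent container reuses the host's Docker daemon.
services:
  build-agent:
    image: gocd/gocd-agent-ubuntu-20.04:v21.3.0     # image/tag assumed
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # host daemon socket
```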

Any guidance on what I should do/read/attempt would be appreciated.
Thanks!
Christian
