On Wed, Aug 27, 2014 at 5:53 PM, Harald Sitter <apachelog...@ubuntu.com>
wrote:

> hola,
>
> (another tldr mail \o/)
>
> the last couple of days I have been looking into jenkins while staging
> a first proof of concept using the existing neon5 tech. I'd like to
> see if anyone has thoughts on whether or not we should use jenkins.
>
> I personally am as yet undecided because, as a matter of fact, we have
> pretty much all the tooling jenkins would provide floating around in
> standalone tools (the various status scripts, all the neon stuff doing
> automatic build orchestration, retry management, and whatnot).
>
> # what's jenkins?
> jenkins is a CI orchestration system with webui used for most CI
> setups (kde uses it, canonical uses various setups for different CI
> concepts). it schedules and manages builds and tracks both their
> current and their over-time status.
>
> # why jenkins?
> for our purposes jenkins would be a glorified schedule manager and
> status dashboard. effectively there would be very little difference
> between it and something homemade as what it does is not exactly
> rocket science. it does however have a thriving user base, a nice web
> status dashboard, logging, status tracking etc. since it is used by a
> lot of other projects there certainly wouldn't be any harm in using
> the same thing, in particular since in the distant future this could
> also allow for resource and experience sharing and whatnot.
>
> a general jenkins job would do the following:
> - poll whatever SCM we use for packaging and *automatically* build
> when a change arrives
> - at least once a day it is triggered by a timer and fetches a new
> tarball (I am as yet unsure how exactly that would work but oh well..)
> - the job updates packaging, the relevant upstream clone and merges
> the two into a Debian source package
> - hurls the source off to launchpad
> - polls launchpad for status
> - fails unless BOTH i386 and amd64 built successfully
> (for now I'd not do arm builds because we have no actual production
> quality products for arm hardware)
> - fetches build log
> - extracts data from log (cmake deps met, lintian clean, symbol
> fails, install fails... pretty much what ppa status does currently)
> - fails if a thing we require did not work out (e.g. missing optional
> cmake dep)
>
> on top of that we *could* have jobs reflect the actual dependencies of
> a build either through jenkins itself (which would formally block a
> job from building until its deps are built) or less formally as part
> of the actual build, where the build would wait in progress until its
> deps are built (and then pass or fail depending on that).
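the less-formal variant could look something like this — deps_published() is a hypothetical stand-in for a real Launchpad query, and the timeouts are arbitrary:

```python
# Sketch of the in-build dependency wait: before building, block until
# everything the package needs is already published, and fail the job
# if that doesn't happen within a timeout.
import time

def deps_satisfied(needed, published):
    """True once every build-dep is in the set of published packages."""
    return set(needed) <= set(published)

def wait_for_deps(needed, deps_published, timeout=3600, poll=60):
    """Poll until deps are built; return False (job fails) on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if deps_satisfied(needed, deps_published()):
            return True
        time.sleep(poll)
    return False
```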
>
> # why not jenkins?
> as mentioned jenkins would be doing what we already have, so from an
> effort POV it probably makes little difference whether we use jenkins
> or write a similar orchestration system from scratch because most of
> the heavy lifting logic is already present in various other scripts
> and tools and would only need to be refactored to allow for more
> atomic usage.
>
> I have not looked particularly into the expandability of jenkins, but
> it being java I am not too keen on writing our own jenkins plugins :P
>
> to ensure jenkins is as dynamic as possible we will need to write
> additional tooling that manages jenkins jobs. namely we'd at the very
> least need a script that gets a list of all packages we want built and
> then automatically creates/updates/deletes jenkins jobs accordingly. a
> brief look at the REST api suggests that we can automate this
> entirely. nevertheless it is a bit of software that will need writing
> and probably wouldn't be needed (or at least not in such a formal
> manner) if we wrote our own orchestration.
>
> # random notes
> jenkins apparently has no concept of job grouping (or directories),
> so jenkins job names will have to be mashed-together names like
> utopic_unstable_plasma-workspace. on the dashboard one can still have
> dedicated views showing all utopic builds, or all unstable builds.
>
> the builds themselves can be done by any old script, jenkins for the
> most part would just be the trigger for the script. the script itself
> would probably do a schroot overlayfs (or lxc overlayfs) and do most
> of the package business internally.
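such a build script could be a thin wrapper around a schroot session — the -b/-r/-e session flags are schroot's real ones, while the chroot name and the inner commands are made up here:

```python
# Very rough sketch of the script jenkins would trigger: run the packaging
# steps inside a throwaway schroot overlay session and always tear it down.
import subprocess

CHROOT = "utopic-amd64"  # hypothetical schroot name

def schroot_run(session, argv):
    """Command line to run argv inside an existing schroot session."""
    return ["schroot", "-r", "-c", session, "--"] + argv

def build_in_overlay(commands):
    """Begin a session, run the build steps, always end the session."""
    session = subprocess.check_output(
        ["schroot", "-b", "-c", CHROOT]).decode().strip()
    try:
        for argv in commands:
            subprocess.check_call(schroot_run(session, argv))
    finally:
        subprocess.check_call(["schroot", "-e", "-c", session])
```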
>
>
> Thoughts?


It's what we use in KDE, it seems like a good idea to use it further.

Actually, it might be interesting if you just hosted the machines doing
the tasks but extended the build.kde.org instance, so the information
could be shown together with each project.

Or maybe it doesn't make sense, just a thought.

Good luck!
Aleix
-- 
kubuntu-devel mailing list
kubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
