I'm not just thinking of using the source code control system to do the cleaning here.

I'm coming to Jenkins after developing my own automated builder based on btrfs snapshots. With btrfs snapshots, it's possible to keep an old build lying around (all artifacts intact) while /also/ creating a fresh incremental build on top of it. The two builds then share files in the file system insofar as they are common, and the file system takes care of which ones stay around.
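
To make that concrete, here's a rough sketch of the core move (paths and build numbers are made up; assumes btrfs-progs is installed, the builds directory is a btrfs subvolume, and you're running with the privileges snapshot creation needs):

    import subprocess

    def snapshot(src, dst):
        # Create a writable btrfs snapshot of src at dst.  Snapshots are
        # cheap copy-on-write clones, so dst initially shares all of its
        # file data with src; they diverge only as files change.
        subprocess.run(["btrfs", "subvolume", "snapshot", src, dst],
                       check=True)

    # Keep build-41 intact while building incrementally on top of it.
    snapshot("/builds/build-41", "/builds/build-42")
    # ... run the incremental build inside /builds/build-42 ...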

With this approach, you can also automate recovery from failures: when an incremental build fails, you simply start the next one from the last successful snapshot. It also allows a kind of "pre-commit" checking, in the sense that changes can be built (as an incremental on top of the previous build) and, if successful, committed, at which point the workspace becomes the next successful snapshot. If the build fails, the next one starts again from the previously successful snapshot.
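
A hedged sketch of that recovery loop, continuing the example above (run_build is a stand-in for whatever your real build step is; here it just assumes a make-based build):

    import subprocess

    def run_build(workdir):
        # Stand-in for the real build step.
        return subprocess.run(["make", "-C", workdir]).returncode == 0

    def next_build(last_good, n):
        work = "/builds/build-%d" % n
        # Branch the new workspace off the last successful snapshot.
        subprocess.run(["btrfs", "subvolume", "snapshot", last_good, work],
                       check=True)
        if run_build(work):
            return work   # success: this snapshot is the new baseline
        # Failure: throw the broken tree away; the next build starts
        # from last_good again, which is untouched.
        subprocess.run(["btrfs", "subvolume", "delete", work], check=True)
        return last_good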

This doesn't have the static disk allocation of Jenkins (one job, at most one workspace); instead, you can use the entire disk as a big ring buffer: remove the oldest builds until there's enough space for the next build, then build.
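
The ring-buffer cleanup can be as simple as "delete the oldest snapshots until statvfs reports enough room". One caveat worth hedging: because snapshots share extents, deleting one may free less space than it appears to occupy, so the loop has to re-check after each deletion, as this sketch does (threshold and naming scheme are made up):

    import os, subprocess

    def free_bytes(path="/builds"):
        st = os.statvfs(path)
        return st.f_bavail * st.f_frsize

    def make_room(needed_bytes, keep):
        # Delete oldest builds first, but never the baseline snapshot
        # we still need.  Assumes zero-padded names so that a plain
        # sort orders them oldest-first.
        for b in sorted(os.listdir("/builds")):
            if free_bytes() >= needed_bytes:
                break
            if b != keep:
                subprocess.run(["btrfs", "subvolume", "delete",
                                "/builds/" + b], check=True)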

This is orders of magnitude faster than using the source code control system to do the cleaning. It's also significantly faster in all workflows and doesn't require any source code control system hooks at all; it's completely agnostic to the source code control system.

If you put btrfs on root, it also protects your build host by effectively running each build in a disk jail. Builds can muck about with the state of the build host, and each build will still be completely independent, with a zero-time setup cost.
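
For what it's worth, the "disk jail" can be as simple as snapshotting the root subvolume and chrooting the build into the copy. Very much a sketch: it assumes the pool's top level is mounted somewhere like /mnt/pool with the running root as a subvolume under it, which is one common layout, not a given:

    import subprocess

    # Snapshot the root subvolume (layout is hypothetical).
    subprocess.run(["btrfs", "subvolume", "snapshot",
                    "/mnt/pool/root", "/mnt/pool/jail-42"], check=True)
    # Run the build chrooted into the snapshot; anything it does to
    # "the system" lands in the copy, not in the real root.
    subprocess.run(["chroot", "/mnt/pool/jail-42",
                    "make", "-C", "/builds/build-42"], check=True)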

--rich
