On Thu, Sep 25, 2014 at 2:41 PM, Michael Raskin <7c6f4...@mail.ru> wrote:
> >It sounds like a necessary evil.
> >
> >Another option would be to make Hydra super fast... What has been explored
> >to optimize compile speeds? Using distcc, ccache, SSD, elastic scaling?
> >
> >What if we had a security build fund that we could use to briefly run 500
> >machines to complete security builds? Would that allow 2-hour security
> >rollouts?
>
> I bet against our package set being buildable in 2 hours — because of
> the time-critical path likely hitting some non-parallelizable package.
> I think most large projects can be compiled via distcc, which means that
> all you need is parallel make. The LibreOffice build is inherently a
> single-machine task, so to speed it up you need something like two
> octo-core CPUs in the box.

Case in point:
https://wiki.documentfoundation.org/Development/BuildingOnLinux#distcc_.2F_Icecream
Building with "icecream" defaults to 10 parallel builds. Also, with ccache
the original build time of 1.5 hours (no Java/EPM) is reduced to 10 minutes
on subsequent runs.

> With such a goal, we would need to recheck all the dependency paths and
> optimise the bottlenecks.

Sounds good :) Another option is to have a jobset which builds only a
"server" subset of Nixpkgs, and which has higher priority than the trunk
builds. I don't know of many servers that need LibreOffice installed...
You can have multiple binary buildsets, right?

> Maybe making dependency replacement work reliably (symlinking into
> a special directory and referring to this directory?) is more feasible…

Can you elaborate?

Wout
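P.S. A "server-only" jobset like the one suggested above could be sketched as
a small Nix expression handed to Hydra as the jobset's release file. Everything
here is illustrative, not an existing jobset — the file name and the package
selection are guesses at what a "server" subset might contain:

```nix
# release-server.nix — hypothetical input for a high-priority Hydra jobset.
# The package list is an assumption, not an agreed-upon "server" set.
{ nixpkgs ? <nixpkgs> }:

let
  pkgs = import nixpkgs { config = {}; };
in
{
  # Hand-picked server-side packages; GUI-heavy packages such as
  # libreoffice and firefox are deliberately omitted, so security
  # rebuilds of this jobset stay fast.
  inherit (pkgs) openssl openssh nginx postgresql bash coreutils;
}
```

The scheduling priority itself would then be configured on the jobset in
Hydra (as far as I know it is a jobset setting, not something expressed in
the Nix file).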
_______________________________________________
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev