On Tue, 25 Sep 2018 22:08:14 +0100 Bertrand Jacquin <bertr...@jacquin.bzh> said:

> On Mon, Sep 24, 2018 at 11:54:16AM +0100, Carsten Haitzler wrote:
> > On Sat, 22 Sep 2018 15:57:27 +0100 Bertrand Jacquin <bertr...@jacquin.bzh>
> > said:
> > 
> > > > > This is something I do not agree with. I have been kicking people in
> > > > > the pants over problems with the infra for _years_ when doing
> > > > > Jenkins. It has changed nothing, and I moved over to cloud services
> > > > > to get the control and flexibility I needed.
> > > > 
> > > > This is a result of policy from Beber of giving pretty minimal VMs with
> > > > limited RAM/disk with Gentoo. We have the resources - they are just not
> > > > being assigned, and being able to provision your own is far too complex
> > > > with what we have. If all you had to do was run some libvirt commands
> > > > to spin up a new VM of whatever size/config you wanted, I think you'd
> > > > be fine.
> > > 
> > > Well, e5 clearly does not have enough memory and CPU to support all the
> > > builds run by Jenkins; this is why we had to split the building instances from
> > 
> > That I just don't buy. I compile all of e, efl, terminology, rage on a
> > raspberry pi with 768m ram (256 partitioned off to gpu) and do parallel
> > builds... and can run a gui at the same time. e5 has 48gb of ram. last i
> > heard from stefan the vm's for building had maybe 2 or 4gb ram allocated to
> > them and limited disk space. correct me if i'm wrong - this may have been a
> > while ago.
> 
> Memory is not the issue here, CPU is. Each VM has 4GB of RAM, each build
> uses -j6, and we can have up to 4 Jenkins builds running at the same
> time, across 3 different VMs.
> 
> Read this a different way: hosting both builds and servers (web, git,
> etc.) on the same resources is not achievable.

Oh, the amount of CPU is indeed not "infinite". We have plenty of disk and RAM
to go around. The way I see it is "keep build jobs in the background and they
take however long they take". Developers should be able to run builds on their
own boxes far faster than the shared infra, and all this kind of QA/CI should
be easily reproducible in a more minimal form on developer workstations. That
scales out the load. I have a feeling we're just trying to do too much in a
central service that SHOULD already be done by developers as they
build/develop etc.
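The "background build" idea above can be sketched as a tiny wrapper. Note the build command and paths here are stand-ins I made up for illustration, not anything from our actual infra:

```shell
#!/bin/sh
# Run a build job at the lowest CPU and I/O priority so foreground services
# (web, git, mail) on the same box stay responsive. The job just takes
# however long it takes.
run_niced() {
    if command -v ionice >/dev/null 2>&1; then
        # -c 3 = "idle" I/O class: the job only gets disk time nobody else wants
        ionice -c 3 nice -n 19 sh -c "$1"
    else
        # no ionice available: at least drop the CPU priority
        nice -n 19 sh -c "$1"
    fi
}

# Stand-in command; a real job would be something like 'cd /srv/builds/efl && ninja'
run_niced 'echo build-done'
```

A cron entry or a Jenkins executor could call the same wrapper; it's the priorities, not the scheduler, that keep the shared host usable.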

> > compared to a raspberry pi .. e5 runs rings around it so many times it's not
> > funny and an rpi can do this easily enough. yes - jenkins adds infra cost
> > itself, but a single vm for linux builds (with multiple chroots) would
> > consume very little resources
> 
> That is true, the VM overhead is not negligible. VMs were the initial
> design and we stuck with it. I am far from being against that, as I'm far
> from being against containers; finding the right time to work on this is
> a different matter.

Indeed, VMs are not free. They cost a whole new OS instance in RAM etc. - thus
why I mention chroots, for example. Containers add some more isolation, but a
single "builder VM" with just chroots should do the trick for any Linux builds.
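To make the "one builder VM, many chroots" idea concrete, here is a rough sketch. The paths, suite names and build command are illustrative assumptions, not our real layout, and both steps need root on the builder:

```shell
#!/bin/sh
# One-time: create a minimal root tree per target distro (uses the
# debootstrap tool from the distro of the same name).
setup_chroot() {
    target=$1   # e.g. /srv/chroots/debian-stable (made-up path)
    [ -d "$target" ] || debootstrap stable "$target" http://deb.debian.org/debian
}

# Per build: run the job inside the tree. Every target distro gets its own
# chroot, all sharing one kernel/VM instead of one full OS image each.
build_in_chroot() {
    target=$1
    shift
    chroot "$target" /bin/sh -c "$*"
}

# Usage (as root):
#   setup_chroot /srv/chroots/debian-stable
#   build_in_chroot /srv/chroots/debian-stable 'cd /build/efl && ninja'
```

The per-chroot disk cost is a few hundred MB; the RAM cost is just the build itself, with no extra kernel or init per target.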

I'm just trying to point out that when you try and do "too much" you create
work and trouble for yourselves. If you behave like you have infinite access
to infrastructure, then you will get into trouble. Behave like your infra has
limits, use it sensibly, and everything can work just fine.

I'm OK having infra that is not on e5/our hw that we can "live without". For
example - if someone wants to have build slaves scaled out across
hosted/sponsored machines to get more builds done per day... that's fine. If
they go away, we turn them off and just do fewer builds per day (like above).
THAT I'm OK with. It allows a fairly painless degradation in service when that
happens. :)

> > as it would only need a single build controller and just
> > spawn off scripts per build that do the chroot fun.
> > 
> > sure - need a vm for bsd, and windows and other OS's that can't do the
> > chroot trick.
> > 
> > > the hosting instances. Even still, current ressources are too limited.
> > > You will not be able to have more than 10 instances running at the same
> > > time.
> > 
> > 10 build instances? if they are properly ionice'd and niced to be background
> > tasks vs www etc... i think we can. they might take longer as the xeons are
> > old on the server, but they can still do the task. i regularly build efl/e
> > on hardware a tiny fraction of the power of e5.
> 
> We don't just have instances for builds, we have instances for web, mail,
> git, phab etc .. which, by the way, were moved to e6 last year after the
> website was pretty much unusable and the disk issue we had - a server that
> I'm still paying for myself. This was meant to be a temporary solution, but
> I did not find the appropriate time to allocate to putting stuff back.

Yeah. I know things moved to e6. I'd like to move stuff back too. I don't want
you paying for this... We have infra. I just ordered a replacement disk BTW for
e5... :)

Maybe it'd make sense to set up that single "VM" on e5 and then move stuff into
it. So it's just e5 -> VM, and this VM just runs shared hosting and/or chroots
and containers?
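If we go that route, the "just run some libvirt commands" claim from earlier in the thread would look roughly like this. The VM name, sizes and install source are all made-up placeholders, and it needs root with libvirtd running:

```shell
#!/bin/sh
# Sketch: provision the single builder/hosting VM on e5 with libvirt's
# virt-install. All numbers and names here are illustrative guesses.
create_builder_vm() {
    virt-install \
        --name builder \
        --memory 8192 --vcpus 4 \
        --disk size=80 \
        --location http://deb.debian.org/debian/dists/stable/main/installer-amd64/ \
        --graphics none \
        --extra-args 'console=ttyS0'
}

# Usage (as root):
#   create_builder_vm && virsh autostart builder
```

Resizing later is also just libvirt commands (`virsh setmaxmem`, `virsh setvcpus`), which is the flexibility being asked for.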

> Cheers
> 
> -- 
> Bertrand


-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
Carsten Haitzler - ras...@rasterman.com



_______________________________________________
enlightenment-devel mailing list
enlightenment-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/enlightenment-devel
