> Oh, the amount of CPU is indeed not "infinite". We have plenty of disk and
> RAM to go around. The way I see it is "keep build jobs in the background
> and they take however long they take". Developers should be able to run
> builds on their own boxes far faster than the shared infra, and all this
> kind of QA/CI should be easily reproducible in a more minimal form on
> developer workstations. That scales out the load. I have a feeling we're
> just trying to do too much in a central service that SHOULD already be done
> by developers as they build and develop.

Agreed
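
The "keep build jobs in the background" part is also cheap to get right on
the server side. A minimal sketch of launching one build as a genuine
background task (Python; the build command and path are made-up placeholders,
not anything we actually run):

    #!/usr/bin/env python3
    # Sketch: run a build at the lowest CPU and I/O priority so it only
    # uses capacity that www/mail/git etc. are not using right now.
    import subprocess
    import sys

    BUILD_CMD = "cd /build/efl && make"   # hypothetical build step

    rc = subprocess.call([
        "ionice", "-c", "3",     # idle I/O scheduling class
        "nice", "-n", "19",      # lowest CPU priority
        "/bin/sh", "-c", BUILD_CMD,
    ])
    sys.exit(rc)

With the idle I/O class and nice 19 the build yields to everything
interactive, which is exactly the "they take however long they take"
behaviour described above.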

> > > Compared to a raspberry pi... e5 runs rings around it so many times
> > > it's not funny, and an rpi can do this easily enough. Yes, jenkins adds
> > > infra cost itself, but a single VM for Linux builds (with multiple
> > > chroots) would consume very few resources.
> > 
> > That is true, the VM overhead is not negligible. VMs were the initial
> > design and we stuck with it. I am far from being against changing that,
> > just as I am far from being against containers; finding the right time to
> > work on this is a different matter.
> 
> Indeed, VMs are not free: they cost a whole new OS instance in RAM etc.,
> which is why I mention chroots, for example. Containers add some more
> isolation, but a single "builder VM" with just chroots should do the trick
> for any Linux builds.
> 
> I'm just trying to point out that when you try to do "too much" you create
> work and trouble for yourselves. If you behave like you have infinite
> access to infrastructure, you will get into trouble. Behave like your infra
> has limits, use it sensibly, and everything can work just fine.
> 
> I'm OK having infra that is not on e5/our hw that we can "live without".
> For example, if someone wants to have build slaves scale out across
> hosted/sponsored machines to get more builds done per day, that's fine. If
> they go away, we turn them off and just do fewer builds per day (like
> above). THAT I'm OK with, as it allows a fairly painless degradation in
> service when that happens. :)
> 
> > > as it would only need a single build controller and just spawn off
> > > scripts per build that do the chroot fun.
> > >
> > > Sure, we'd need a VM for BSD, Windows and other OSes that can't do the
> > > chroot trick.
> > > 
> > > > the hosting instances. Even so, current resources are too limited.
> > > > You will not be able to have more than 10 instances running at the
> > > > same time.
> > > 
> > > 10 build instances? If they are properly ionice'd and niced to be
> > > background tasks vs www etc., I think we can. They might take longer,
> > > as the xeons in the server are old, but they can still do the job. I
> > > regularly build efl/e on hardware with a tiny fraction of the power of
> > > e5.
> > 
> > We don't just have instances for builds; we have instances for web, mail,
> > git, phab etc. Which, by the way, were moved to e6 last year after the
> > website was pretty much unusable and we had the disk issue; that is a
> > server I'm still paying for myself. It was meant to be a temporary
> > solution, but I did not find the time to put stuff back.
> 
> Yeah, I know things moved to e6. I'd like to move stuff back too; I don't
> want you paying for this... We have infra. BTW, I just ordered a
> replacement disk for e5... :)

Appreciated
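
On the controller side, the "single build controller that spawns off scripts
per build" idea from above is also small. A sketch, assuming Debian-style
chroots under /srv/chroots (paths, chroot names, job list and the concurrency
cap are all assumptions for illustration, not our actual setup):

    #!/usr/bin/env python3
    # Sketch of a single build controller: run each build inside a chroot,
    # niced/ionice'd as above, with a hard cap on concurrent builds so the
    # box never gets swamped. Everything below is illustrative only.
    from concurrent.futures import ThreadPoolExecutor
    import subprocess

    MAX_CONCURRENT = 4   # assumed cap; tune to what e5 can actually carry

    def run_build(chroot, build_cmd):
        # needs root, since it calls chroot(8)
        return subprocess.call([
            "ionice", "-c", "3", "nice", "-n", "19",
            "chroot", chroot, "/bin/sh", "-c", build_cmd,
        ])

    jobs = [   # hypothetical job list
        ("/srv/chroots/debian-amd64", "cd /build/efl && make"),
        ("/srv/chroots/fedora-amd64", "cd /build/e && make"),
    ]

    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
        for rc in pool.map(lambda j: run_build(*j), jobs):
            print("build exited with", rc)

If extra sponsored build slaves show up, they can run the same script and
raise the total builds per day; if they go away, the cap here is the graceful
floor, matching the painless-degrade behaviour described above.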

> Maybe it'd make sense to set up that single "VM" on e5 and then move stuff
> into it, so it's just e5 -> VM, and this VM just runs shared hosting and/or
> chroots and containers?

That would be the right thing to do with what we have available today. Let's
fix the fan and disk issues before touching anything else, so we avoid mixing
different problems at the same time.

Cheers

-- 
Bertrand
