On Tue, Jun 8, 2021 at 2:30 AM Philippe Mathieu-Daudé <f4...@amsat.org>
wrote:

> Hi Alex, Stefan,
>
> On 6/8/21 5:14 AM, Cleber Rosa wrote:
> > The QEMU project has two machines (aarch64 and s390x) that can be used
> > for jobs that build and run tests.
>
> AFAIK there is more hardware available to the project, so I'm wondering
> what happened to the rest, is it a deliberate choice to start small?
>

Hi Phil,

Yes, this series was deliberately focused on the first two machines owned
by, and available to, the QEMU project.


> What will happen with the rest, since we are wasting resources?
>

The plan is to allow all machines (both currently available and yet to become
available) to be connected as custom runners.  This series hopefully gets
that started.

Regarding "more hardware available to the project": there's one VM from
fosshost which was made available not long ago, and which I set up even
more recently, that could also be used as a GitLab runner.  But even though
some new hardware is available (and going to waste?), the human resources
to maintain it have not been properly determined, so I believe it's a good
decision to start with the machines that have been operational for a long
time and that, to the best of my knowledge, already have people maintaining
them.

I also see a "Debian unstable mips64el (Debian) @ cipunited.cn" registered
as a runner, but I don't have extra information about it.

Besides that, I'll shortly send another series that builds upon this one
and adds a Red Hat focused job running on a Red Hat managed machine.  That
should be an example of what other entities would be capable of doing and
allowed to do.
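
For a sense of the mechanism, a job tied to a specific custom runner is
just a regular GitLab CI job gated by runner tags.  The job name, tag and
rule below are assumptions of mine for illustration, not the contents of
that upcoming series:

    # .gitlab-ci.d/custom-runners.yml -- hypothetical job sketch
    centos-stream-x86_64-redhat:
      tags:
        - redhat-managed          # assumed tag set on the private runner
      rules:
        # example policy: only run on the upstream namespace's staging branch
        - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH == "staging"'
      script:
        - mkdir build && cd build
        - ../configure --enable-werror
        - make -j$(nproc)
        - make check

The "tags" section is what pins the job to runners registered with matching
tags, which is how an entity's job can be kept on that entity's machine.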


> Who has access to what and should do what (setup)? How is this list of
> hw managed btw? Should there be some public visibility (i.e. Wiki)?
>

These are good questions, and I believe Alex can answer them for those
two machines.  Even though I hooked them up to GitLab, AFAICT he is the
ultimate admin (maybe Peter too?).

About hardware management, it has been suggested to use either the Wiki or
a MAINTAINERS entry.  This is still unresolved and feedback would be
appreciated.  For me to propose a MAINTAINERS entry, say in a v7, I'd need
information on who is managing the machines.


> Is there a document explaining the steps to follow for an entity to
> donate / sponsor hardware? Where would it be stored, should this hw be
> shipped somewhere? What documentation should be provided for its system
> administration?
>
> In case an entity manages hosting and maintenance, can the QEMU project
> share the power bill? Up to which amount?
> Similar question if a sysadmin has to be paid.
>
> If the QEMU project can't spend money on CI, what is expected from
> resource donators? Simply hardware + power (electricity) + network
> traffic? Also sysadmining and monitoring? Do we expect some kind of
> reliability on the data stored here or can we assume disposable /
> transient runners?
> Should donators also provide storage? Do we have a minimum storage
> requirement?
>
> Should we provide some guideline explaining that any code might be run
> by our contributors on these runners and that some security measures
> have to be taken / considered?
>
> Sponsors usually expect some advertising to thank them, and often
> regular reports on how their resources are used, else they might not
> renew their offer. Who should take care of maintaining the relationship
> with sponsors?
>
> Where is it defined what belongs to the project, who is responsible, who
> can request access to this hardware, and what resources can be used?
>
>
You obviously directed those questions towards Alex and Stefan (rightfully
so), so I won't attempt to answer them at this point.


> More generically, what is the process for a private / corporate entity
> to register a runner with the project? (how did it work for the aarch64
> and s390x ones?)
>

The steps are listed in the documentation.  Basically, anyone with knowledge
of the "registration token" can add new machines to GitLab as runners.  For
the aarch64 and s390x machines, it was a matter of following the
documentation, which basically involves (sketched below):

1) providing the hostname(s) in the inventory file
2) providing the "registration token" in the vars.yml file
3) running the playbooks
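
To make that concrete, here's roughly what those three steps look like on
my end.  The file locations, variable name and playbook names below are
from my own setup of this series, so please treat them as assumptions
rather than as reference documentation:

    # scripts/ci/setup/inventory -- hostnames here are placeholders
    aarch64-host.example.org
    s390x-host.example.org

    # scripts/ci/setup/vars.yml
    # the token comes from the project's Settings -> CI/CD -> Runners page
    gitlab_runner_registration_token: REPLACE_WITH_TOKEN

    # prepare the build environment, then register the gitlab-runner service
    $ ansible-playbook -i inventory build-environment.yml
    $ ansible-playbook -i inventory gitlab-runner.yml

Once the second playbook finishes, the new machine should show up among the
project's runners in the GitLab settings page.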


>
> What else am I missing?
>
>
I think you're missing the answers to all your good questions :).

And I understand there are a lot of them (from everyone, including myself).
The dilemma here is: should we activate the machines already available and
learn, in practice, what's missing?  I honestly believe we should.

Thanks,
- Cleber.


> Thanks,
>
> Phil.
>
> > This introduces those jobs,
> > which are a mapping of custom scripts used for the same purpose.
> >
> > Signed-off-by: Cleber Rosa <cr...@redhat.com>
> > ---
> >  .gitlab-ci.d/custom-runners.yml | 208 ++++++++++++++++++++++++++++++++
> >  1 file changed, 208 insertions(+)
>
>
