On Thu, Feb 11, 2021 at 05:37:20PM +0000, Peter Maydell wrote:
> On Thu, 11 Feb 2021 at 17:25, Cleber Rosa <cr...@redhat.com> wrote:
> > IIUC, this specific issue was caused by a runaway QEMU. Granted, it was
> > started by an Avocado test. I've opened a bug report to look into the
> > possibilities to mitigate or prevent this from happening again:
>
> I wonder if we could have avocado run all our acceptance cases
> under a 'ulimit -f' setting that restricts the amount of disk
> space they can use? That would restrict the damage that could
> be done by any runaways. A CPU usage limit might also be good.
>
> thanks
> -- PMM
>
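For reference, a minimal sketch of the 'ulimit' idea. The limit values are illustrative, and the dd invocation is just a stand-in for an 'avocado run' command that misbehaves:

```shell
#!/bin/sh
# Sketch: run the test suite in a subshell with a file-size cap (and a
# CPU-time cap), so a runaway QEMU spawned by a test inherits the same
# limits. All values here are illustrative.
workdir=$(mktemp -d)
(
    ulimit -f 1024        # cap file size; bash counts 1024-byte blocks,
                          # so this is ~1 MiB (units vary by shell)
    ulimit -t 600         # cap CPU time at 600 seconds per process
    # stand-in for 'avocado run tests/acceptance/': try to write 10 MiB
    dd if=/dev/zero of="$workdir/big" bs=1M count=10 2>/dev/null
)
ls -l "$workdir/big"      # the file stops at the cap instead of reaching 10 MiB
rm -rf "$workdir"
```

Exceeding the cap raises SIGXFSZ, which kills the offending process (dd above), so the damage a runaway can do to the disk is bounded.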
To me that sounds a lot like Linux cgroups.

I can see either someone setting up a cgroup and having Avocado run
inside it (so that all tests inherit from this common parent), or
alternatively Avocado setting up a cgroup for each of the tests. The
former seems simpler, and effective at preventing the exhaustion of
system resources. I can see a use case for the latter when tests
actually want to verify a behavior while certain resources are
constrained.

We can have a script set up a cgroup as part of a gitlab-ci.{yml,d}
job, for the jobs that will run on the non-shared GitLab runners (such
as the s390 and aarch64 machines owned by the QEMU project).

Does this sound like a solution?

Thanks,
- Cleber.
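For the "common parent" variant on a dedicated runner, a transient systemd scope is one low-effort way to get such a cgroup. A sketch, assuming a systemd host; the property values and the avocado invocation are illustrative, not a tested configuration:

```shell
#!/bin/sh
# Sketch: put the whole test run (and every QEMU it spawns) into one
# transient cgroup via systemd-run, so the limits are inherited by all
# children. Property values are illustrative:
#   MemoryMax=8G   - cap total memory used by the run
#   CPUQuota=400%  - at most four CPUs' worth of CPU time
#   TasksMax=512   - bound runaway process/thread creation
systemd-run --scope \
    -p MemoryMax=8G -p CPUQuota=400% -p TasksMax=512 \
    avocado run tests/acceptance/
```

The same properties could instead go into a persistent slice unit that the gitlab-runner service is configured to run under, which matches the "set it up once, all tests inherit it" approach.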