----- Original Message -----
> From: "Philippe Mathieu-Daudé" <phi...@redhat.com>
> To: "Eduardo Habkost" <ehabk...@redhat.com>, "Cleber Rosa" <cr...@redhat.com>
> Cc: "Aleksandar Rikalo" <arik...@wavecomp.com>, "Philippe Mathieu-Daudé" <f4...@amsat.org>,
> "Wainer dos Santos Moschetta" <waine...@redhat.com>, qemu-devel@nongnu.org,
> "Aleksandar Markovic" <aleksandar.m.m...@gmail.com>, "Aleksandar Markovic" <amarko...@wavecomp.com>,
> "Aurelien Jarno" <aurel...@aurel32.net>
> Sent: Thursday, May 23, 2019 5:38:34 AM
> Subject: Re: [Qemu-devel] [PATCH 0/4] mips: Add more Avocado tests
> 
> On 5/23/19 1:07 AM, Eduardo Habkost wrote:
> > On Wed, May 22, 2019 at 05:46:06PM -0400, Cleber Rosa wrote:
> >> ----- Original Message -----
> >>> From: "Eduardo Habkost" <ehabk...@redhat.com>
> >>> On Tue, May 21, 2019 at 01:19:06AM +0200, Philippe Mathieu-Daudé wrote:
> >>>> Hi,
> >>>>
> >>>> It was a rainy weekend here, so I invested it in automating some
> >>>> of my MIPS tests.
> >>>>
> >>>> The BootLinuxSshTest is not global-warming friendly; it is not
> >>>> meant to run on a CI system, but rather on a workstation before
> >>>> posting a pull request.
> >>>> It can surely be improved, but it is a good starting point.
> >>>
> >>> Until we actually have a mechanism to exclude the test case on
> >>> travis-ci, I will remove patch 4/4 from the queue.  Aleksandar,
> >>> please don't merge patch 4/4 yet or it will break travis-ci.
> >>>
> >>> Cleber, Wainer, is it already possible to make "avocado run" skip
> >>> tests tagged with "slow"?
> >>>
> >>
> >> The mechanism exists, but we haven't tagged any test so far as slow.
> >>
> >> Should we define/document criteria for a test to be considered slow?
> >> Given that this is highly subjective, we have to think about:
> >>
> >>  * Will we consider the average or maximum run time (the timeout
> >>    definition)?
> >>  
> >>  * For a single test, what is "slow"? Some rough numbers from Travis
> >>    CI[1] to help us with guidelines:
> >>    - boot_linux_console.py:BootLinuxConsole.test_x86_64_pc:  PASS (6.04 s)
> >>    - boot_linux_console.py:BootLinuxConsole.test_arm_virt:  PASS (2.91 s)
> >>    - linux_initrd.py:LinuxInitrd.test_with_2gib_file_should_work_with_linux_v4_16:  PASS (18.14 s)
> >>    - boot_linux.py:BootLinuxAarch64.test_virt:  PASS (396.88 s)
> > 
> > I don't think we need to overthink this.  Whatever objective
> > criteria we choose, I'm sure we'll have to adapt them later due
> > to real world problems.
> > 
> > e.g.: is 396 seconds too slow?  I don't know, it depends: does it
> > break Travis and other CI systems often because of timeouts?  If
> > yes, then we should probably tag it as slow.
> > 
> > If having subjective criteria is really a problem (I don't think
> > it is), then we can call the tag "skip_travis", and stop worrying
> > about defining what exactly is "slow".
> 
> I'd go with a simpler "tags:travis-ci", whitelisting any job expected to
> run smoothly there.
> 

My concern is what becomes of "make check-acceptance".  Should we introduce
another target, say, "make check-acceptance-ci", or just change its meaning
and reuse it?
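If we go the separate-target route, the split could look roughly like the
sketch below.  To be clear, this is only a sketch: the "check-acceptance-ci"
target name and the $(AVOCADO) variable are hypothetical stand-ins for
whatever tests/Makefile.include really defines, and it assumes Avocado's
tag-filtering options (--filter-by-tags, where a leading minus excludes a
tag, plus --filter-by-tags-include-empty so untagged tests stay selected):

```make
# Sketch only: target names and variables are hypothetical, standing in
# for whatever tests/Makefile.include really defines.
check-acceptance:
	$(AVOCADO) run tests/acceptance

# CI variant: same tests, minus anything tagged "slow"; untagged tests
# are kept in the selection via --filter-by-tags-include-empty
# (assumed Avocado options).
check-acceptance-ci:
	$(AVOCADO) run tests/acceptance \
	    --filter-by-tags=-slow \
	    --filter-by-tags-include-empty
```

That way "make check-acceptance" keeps its current meaning for developers,
and CI setups opt into the restricted selection.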

> Then we can add "slow" tests without having to worry about blacklisting
> them for Travis CI.
> Also, other CI systems can set different timeouts.
> 
> I'd like maintainers to add as many tests as they want upstream, so
> these tests can eventually be run by anyone; then each maintainer is free
> to select which particular set to run by default.
> 

OK, so this matches the idea of carefully curating a set of tests for
CI.  WRT white- or blacklisting, I favor the approach that requires the
least effort from developers to have their tests enabled, so I'd go
with blacklisting.  I fear that simple tests will just sit in the repo
without being properly exercised if we need to whitelist them.
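To make that trade-off concrete, here's a tiny self-contained sketch of
tag-based selection (my own toy model, not Avocado's implementation; the
test names and tags below are made up): with blacklisting, a brand-new,
untagged test runs by default, while with whitelisting it is silently
dropped until someone tags it.

```python
def select_tests(tests, exclude=None, include=None):
    """Select (name, tags) pairs by tag.

    exclude (blacklist): drop tests carrying a banned tag; everything
    else, including untagged tests, still runs.
    include (whitelist): keep only tests carrying an allowed tag; an
    untagged test is silently skipped.
    """
    exclude = set(exclude or ())
    include = set(include or ())
    selected = []
    for name, tags in tests:
        tags = set(tags)
        if tags & exclude:
            continue
        if include and not (tags & include):
            continue
        selected.append(name)
    return selected

tests = [
    ("test_x86_64_pc", {"arch:x86_64"}),
    ("test_virt_aarch64", {"arch:aarch64", "slow"}),
    ("test_new_untagged", set()),
]

# Blacklisting "slow": the new, untagged test is still exercised.
print(select_tests(tests, exclude={"slow"}))   # ['test_x86_64_pc', 'test_new_untagged']

# Whitelisting "travis-ci": nothing carries that tag, so nothing runs.
print(select_tests(tests, include={"travis-ci"}))  # []
```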

But I'll certainly and gladly accept the majority's opinion here.

Regards,
- Cleber.

> >>  * Do we want to set a maximum job timeout?  This way we can skip
> >>    tests after a given amount of time has passed.  Currently we interrupt
> >>    the running test when the job timeout is reached, but it's possible
> >>    to add an option so that no new tests will be started, while currently
> >>    running ones will be waited on.
> > 
> > I'm not sure I understand the suggestion to skip tests.  If we
> > skip tests after a timeout, how would we differentiate a test
> > being expectedly slow from a QEMU hang?
> > 
> 
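The job-timeout behaviour proposed above (don't start new tests once the
deadline passes, but wait for the one already running) could be sketched
like this; a toy model of the proposal, not Avocado's actual runner:

```python
import time

def run_job(tests, job_timeout, run_test):
    """Toy model: after the job deadline, no new test is started (those
    are reported separately, as never-run), but a test that already
    started is waited on until it finishes or hits its own timeout."""
    deadline = time.monotonic() + job_timeout
    results, not_started = {}, []
    for name, func in tests:
        if time.monotonic() >= deadline:
            not_started.append(name)    # deadline hit before it began
            continue
        results[name] = run_test(func)  # may legitimately run past the deadline
    return results, not_started
```

As I read it, that also bears on the hang question: a test that did start
still has its own per-test timeout, so an expectedly slow test finishing
late is distinguishable from a QEMU hang that trips that timeout, while a
never-started test is simply reported as such.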

