On Wed, Jun 2, 2021 at 11:08 AM Cleber Rosa Junior <cr...@redhat.com> wrote:
>
> On Thu, May 27, 2021 at 9:45 AM Willian Rampazzo <wramp...@redhat.com> wrote:
>>
>> On Wed, May 26, 2021 at 8:41 PM Cleber Rosa <cr...@redhat.com> wrote:
>> >
>> > On Wed, May 26, 2021 at 05:56:01PM -0300, Willian Rampazzo wrote:
>> > > The current host for the image
>> > > Armbian_20.08.1_Orangepipc_bionic_current_5.8.5.img.xz
>> > > (archive.armbian.com) has been extremely slow for the last couple of
>> > > weeks, making the job that runs the test
>> > > tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_bionic_20_08
>> > > for the first time, when the image is not yet in the GitLab cache,
>> > > time out while the image is being downloaded.
>> > >
>> > > This changes the host to a faster one, so new users with an empty
>> > > cache are not impacted.
>> > >
>> > > Signed-off-by: Willian Rampazzo <willi...@redhat.com>
>> > > ---
>> > >  tests/acceptance/boot_linux_console.py | 3 ++-
>> > >  1 file changed, 2 insertions(+), 1 deletion(-)
>> > >
>> > > diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
>> > > index 276a53f146..51c23b822c 100644
>> > > --- a/tests/acceptance/boot_linux_console.py
>> > > +++ b/tests/acceptance/boot_linux_console.py
>> > > @@ -804,7 +804,8 @@ def test_arm_orangepi_bionic_20_08(self):
>> > >          # to 1036 MiB, but the underlying filesystem is 1552 MiB...
>> > >          # As we expand it to 2 GiB we are safe.
>> > >
>> > > -        image_url = ('https://archive.armbian.com/orangepipc/archive/'
>> > > +        image_url = ('https://armbian.systemonachip.net/'
>> > > +                     'archive/orangepipc/archive/'
>> >
>> > Hi Willian,
>> >
>> > I was so annoyed by my pipeline failures that I came up with:
>> >
>> > https://gitlab.com/cleber.gnu/qemu/-/commit/917b3e376e682e9c35c6f7f597ffca110c719e13
>> >
>> > to prove that it was a GitLab <-> archive.armbian.com issue.
>>
>> When I tried both links, the slow one and this new one, on my machine,
>> I could see the slow link is also slow locally. Not as slow as on
>> GitLab, but 10 times slower than this new link. I was thinking about
>> opening an issue on GitLab. In the worst case, they will say it is not
>> their fault, but a problem on the other end.
>>
>> > But I wonder:
>> >
>> > 1. how susceptible to the same situation is this other mirror?
>>
>> Unfortunately, having tests depend on external artifacts will bring
>> this kind of situation. Unless GitLab is doing traffic shaping, we
>> will never know how susceptible an external server is to any kind of
>> instability.
>>
>> > 2. how trustworthy is this mirror, say, stability-wise? Maybe
>> >    people in the armbian community would have some info?
>>
>> This new link is the same one that the "Archived versions" section of
>> https://www.armbian.com/orange-pi-pc/ points to, so I consider it an
>> official Armbian mirror. That's why I have not thought much about
>> changing it.
>>
>> Now, stability-wise, we never know :) I don't think we have this
>> answer for any of the links to external artifacts that the QEMU
>> acceptance tests use.
>>
>> > Depending on the feedback we get, this can be a very valid
>> > hotfix/workaround indeed. But the core issues we need to look into
>> > are:
>> >
>> > a. applying a timeout when fetching assets. If the asset fails to be
>> >    fetched within the timeout, the test simply gets canceled.
>>
>> But this is failing during the download, before the test starts, or in
>> the pre-phase. The test suite has not been created yet, and Avocado
>> does not have an asset <=> test mapping at that point.
>>
>
> Right. But my point is that if the download times out, then this "best
> effort" attempt would fail (but not abort the job). Then, during the
> test itself, considering `cancel_on_missing=True`, the test would also
> be canceled when it fails to access the asset.
>
> A canceled test is what we want here, and not a stuck job.
> That's why I still think the timeout may be a solution.
>
Okay, got it! I opened an issue to track this feature:
https://github.com/avocado-framework/avocado/issues/4643

Anyway, for now, I think changing the URL gives us some time until we
have the problem again :)

> Thanks,
> - Cleber.
>
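[Editor's note: the "timeout, then cancel" behavior discussed above can be
sketched roughly as below. This is a minimal standalone illustration, not the
actual Avocado `fetch_asset` implementation; the function name and the
"return None on failure" convention are hypothetical.]

```python
import urllib.request
import urllib.error


def fetch_asset_with_timeout(url, timeout=60):
    """Best-effort download of `url`.

    Returns the downloaded bytes, or None if the server does not
    respond within `timeout` seconds or the fetch fails for any other
    reason. A test runner can then treat None as "cancel this test"
    instead of letting a slow mirror hang the whole CI job.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except (urllib.error.URLError, OSError):
        # Covers connection failures, HTTP errors, and socket timeouts.
        return None
```

A caller would then do something like `data = fetch_asset_with_timeout(image_url)` and cancel (rather than fail) the test when `data is None`, mirroring the `cancel_on_missing=True` behavior mentioned above.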