I wonder what the best way to improve the situation would be. There are two
issues as I see it:
a) Some packages (e.g. pipewire.i686) are not in the default installation and
therefore our automated checks don't catch them. The manual checks are often
done with default installations as well, but someone with a real-world
machine can stumble on them.
b) Some packages (I think this is the case of iptables) are in the default
installation, but sit in updates-testing, which we ignore during upgrade
testing. On one hand that makes sense: we want to make sure the stable repo
is functional, and updates-testing often includes broken stuff which could
prevent us from completing the upgrade test at all. On the other hand, on
release day a lot of packages get pushed from updates-testing to updates,
which can result in broken upgrades we knew nothing about. To complicate
things even further, it's a constantly moving target, i.e. the set of
packages which are "pending stable" (to be pushed to updates once the freeze
is over) can change literally every minute (see the sketch below for one way
to take a snapshot of it).
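For illustration, the currently pending-stable set can be queried from Bodhi
with its CLI. A sketch, with FNN as a placeholder for the release and with
the option names written from memory, so double-check them before relying on
it:

  # list updates still in testing whose stable push has been requested
  bodhi updates query --releases FNN --status testing --request stable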

The real fix for both cases would be a static test which checks the
installability of packages and doesn't allow pushing updates which fail it.
Something similar to rpmdeplint, which we used to run in Taskotron
(rpmdeplint itself had many issues; it would need substantial improvement,
or a rewrite, to become a reliable gating test). But we don't have such a
test and likely won't have it for a long time.
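In the meantime, a crude approximation might be the repoclosure plugin from
dnf-plugins-core, run against the composed repos. A sketch, assuming the
plugin is installed and the repo ids match the usual Fedora ones (it only
checks dependency closure, not the conflict cases rpmdeplint also tried to
cover):

  # verify that the newest packages in updates-testing can have their
  # dependencies resolved against fedora + updates
  dnf repoclosure --newest \
      --repo fedora --repo updates --repo updates-testing \
      --check updates-testing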

So what can we do that is easier and at least partly covers these issues?
For a), I think we need to rely on community reporting, because we can
hardly test every random combination of installed packages. But for b), we
could extend our upgrade testing matrices with two variants - one would test
with stable updates only, the other with updates-testing enabled (see the
sketch below). It is not perfect, because any broken update in
updates-testing can obscure other issues ("dnf --exclude" is not always a
remedy), so we can still miss important problems. We can't force package
maintainers to unpush problematic updates just because we want to test the
rest of the repo, and even if we could, refreshing the repo takes at least a
day, so the turnaround is very slow. Still, it gives us a better chance to
spot potential issues, make sure they are reported, and keep them from being
pushed stable (by giving them negative karma). Thoughts?
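Implementation-wise, the two variants would differ in essentially one switch
on the upgrade command. A sketch, with NN standing for the target release:

  # variant 1: stable updates only
  dnf system-upgrade download --releasever=NN
  # variant 2: updates-testing enabled as well
  dnf system-upgrade download --releasever=NN --enablerepo=updates-testing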

We can also test upgrades once the updates repo gets populated, as you
mentioned. It's pretty late to detect problems at that stage :-/, but better
late than never, I guess. I wonder if we could use openQA for this [1].
Adam?

[1] Which reminds me, we should make sure to run "dnf system-upgrade" with
"--best", to catch these issues.