On Thu, Dec 10, 2020 at 11:22 PM Michael Scherer <msche...@redhat.com> wrote:
> On Thursday, 10 December 2020 at 22:06 +0530, sankarshan wrote:
> > What is your recommendation? As in, the next steps from here.
>
> - check if there is a c8s image on Amazon already
>
> If there is one:
> - switch the image and reinstall the builders (3rd time this week, so I
>   should not stumble like the previous 2 times)
>
> If not:
> - install the centos8-stream-release rpm, dnf upgrade -y, reboot
> - add the rpm in ansible so it will be there if we reinstall
> - wait for a proper EC2 image and add it to ansible
>
> Given that we had kernel bugs in the past (
> https://github.com/gluster/glusterfs/issues/1402#issuecomment-666358241
> ), I think faster access to fixes for the CI (or even for production)
> is a good idea.

Please go ahead! This looks like a good plan for adhering to CentOS
Stream. From the outside, centos8-stream looks similar to Fedora
Rawhide: a moving base. As long as the tests run fine, we are fine;
when something breaks, there are now more than just glusterfs
components to consider while debugging.

Regards,
Amar

> > On Thu, 10 Dec 2020 at 21:28, Michael Scherer <msche...@redhat.com>
> > wrote:
> > >
> > > On Thursday, 10 December 2020 at 21:14 +0530, sankarshan wrote:
> > > > There are 2 specific bits which I expected to stimulate
> > > > discussion:
> > > >
> > > > [1] a review by the Gluster Infrastructure team, in terms of
> > > > whether there is any change in the processes/environment
> > >
> > > I was about to ask. For now, we run C8 almost nowhere, except on 2
> > > builders.
> > >
> > > I pondered between switching both of them to C8s, or reinstalling
> > > 2 on C8s and keeping 2 on C8, to compare.
> > >
> > > I do not expect disruptive changes on c8s that wouldn't already
> > > happen on c8 with a minor version, so I am ok to just switch; I
> > > just do not have any CentOS 8 to test.
> > >
> > > > [2] whether the maintainers will consider reviewing this in its
> > > > entirety and be able to assess the impact
> > > >
> > > > To my knowledge this topic was not previously brought up at any
> > > > of the Gluster meetings, so it is worth requesting all parties
> > > > involved to take a moment to form their opinions and use the
> > > > appropriate forums to discuss it.
> > > >
> > > > On Wed, 9 Dec 2020 at 14:04, Niels de Vos <nde...@redhat.com>
> > > > wrote:
> > > > >
> > > > > On Tue, Dec 08, 2020 at 09:00:35PM +0530, sankarshan wrote:
> > > > > > FYI. Would likely be important in the context of packaging,
> > > > > > testing and release content.
> > > > >
> > > > > Indeed, we currently build packages in the CentOS Storage SIG
> > > > > against CentOS Linux, and not against CentOS Stream. But other
> > > > > than that, I do not expect major visible changes for our
> > > > > users.
> > > > >
> > > > > The main advantage is that we can contribute to the
> > > > > distribution more directly. CentOS Stream allows us to send
> > > > > PRs that get reviewed by Red Hat Enterprise Linux developers
> > > > > and potentially get included. That means enhancements to FUSE
> > > > > or other components do not need to rely on the work Red Hat is
> > > > > planning, but could be worked on by our community and get
> > > > > included earlier.
> > > > >
> > > > > If there are any concerns, I'd love to hear about them.
> > > > >
> > > > > Thanks,
> > > > > Niels
> > >
> > > --
> > > Michael Scherer / He/Il/Er/Él
> > > Sysadmin, Community Infrastructure
>
> --
> Michael Scherer / He/Il/Er/Él
> Sysadmin, Community Infrastructure

--
https://kadalu.io
Container Storage made easy!
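For reference, Michael's fallback path (when no c8s EC2 image is available yet) amounts to roughly the following shell sequence. This is only a sketch: the release package name `centos-release-stream` and the use of `dnf distro-sync` are assumptions, not confirmed in the thread, so verify them against the CentOS documentation before running on a builder.

```shell
# Sketch of the in-place CentOS 8 -> CentOS 8 Stream conversion
# described in the plan above. Run as root on the builder.
migrate_to_stream() {
    # RUN defaults to "echo" so the function only prints the plan;
    # set RUN= (empty) to actually execute the commands.
    RUN="${RUN:-echo}"
    $RUN dnf install -y centos-release-stream  # assumed package name; pulls in Stream repo files
    $RUN dnf distro-sync -y                    # converge on the Stream package set (plan said "dnf upgrade -y")
    $RUN reboot                                # boot into the Stream kernel
}
```

Calling `migrate_to_stream` without touching `RUN` just prints the three commands, which is a cheap way to record the intended steps in ansible notes before the proper EC2 image lands.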
_______________________________________________ Gluster-infra mailing list Gluster-infra@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-infra