[ovirt-devel] Re: Integrating OST with artifacts built in github

2021-12-02 Thread Michal Skrivanek


> On 2. 12. 2021, at 19:09, Nir Soffer  wrote:
> 
> Looking at this very helpful document
> https://ovirt.org/develop/developer-guide/migrating_to_github.html
> 
> The suggested solution is to create artifacts.zip with all the rpms
> for a project.
> 
> But to use the rpms in OST, we need to create a yum repository before 
> uploading
> the artifacts, so we can pass a URL of a zip file with a yum repository.
> 
> Here is what we have now in ovirt-imageio:
> 
> 1. We build for multiple distros:
> https://github.com/nirs/ovirt-imageio/blob/790e6b79e756de24ef5134aa583bea46e7fbbfb4/.github/workflows/ci.yml#L32
> 
> 2. Every build creates a repo in exported-artifacts
> https://github.com/nirs/ovirt-imageio/blob/790e6b79e756de24ef5134aa583bea46e7fbbfb4/ci/rpm.sh#L7
> 
> 3. Every build uploads the exported artifacts to rpm-{distro}.zip
> https://github.com/nirs/ovirt-imageio/blob/790e6b79e756de24ef5134aa583bea46e7fbbfb4/.github/workflows/ci.yml#L53

Yes, something like this looks ideal. The only thing I'd like to get to is a 
common organization-wide template or action so that we do not have to 
reimplement this in every single oVirt project.

> 
> An example build:
> https://github.com/nirs/ovirt-imageio/actions/runs/1531658722
> 
> To start OST manually, a developer can copy a link to the right zip file
> (e.g. for CentOS Stream 8):
> https://github.com/nirs/ovirt-imageio/suites/4535392764/artifacts/121520882
> 
> And pass the link to the OST "build with parameters" job.
> 
> In this solution, the OST side gets a repo that can be included in the
> build without any additional code or logic - just unzip and use the repo
> from the directory.

We plan to add this to OST directly. We currently have helper code that 
handles stdci's Jenkins repos; we can implement similar functionality for 
GitHub's zip files.

> 
> I think this is the minimal solution to allow running OST with artifacts
> built in GitHub.
> 
> For triggering jobs automatically, we will need a way to find the
> right artifacts for a build,
> or use some convention for naming the artifacts in all projects.

yeah, so probably a common oVirt action that does the repo creation and is 
used by all projects would do the job...

> 
> I started with the simple convention of jobname-containername since it
> is easy to integrate
> with the infrastructure we already have in the project.
> 
> Nir
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VWVEKLOM5OVM7E632BATLL3YOD762MVE/


[ovirt-devel] Integrating OST with artifacts built in github

2021-12-02 Thread Nir Soffer
Looking at this very helpful document
https://ovirt.org/develop/developer-guide/migrating_to_github.html

The suggested solution is to create artifacts.zip with all the rpms
for a project.

But to use the rpms in OST, we need to create a yum repository before uploading
the artifacts, so we can pass a URL of a zip file with a yum repository.

Here is what we have now in ovirt-imageio:

1. We build for multiple distros:
https://github.com/nirs/ovirt-imageio/blob/790e6b79e756de24ef5134aa583bea46e7fbbfb4/.github/workflows/ci.yml#L32

2. Every build creates a repo in exported-artifacts
https://github.com/nirs/ovirt-imageio/blob/790e6b79e756de24ef5134aa583bea46e7fbbfb4/ci/rpm.sh#L7

3. Every build uploads the exported artifacts to rpm-{distro}.zip
https://github.com/nirs/ovirt-imageio/blob/790e6b79e756de24ef5134aa583bea46e7fbbfb4/.github/workflows/ci.yml#L53
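For reference, a minimal sketch of the repo-creation part of step 2 (assuming 
createrepo_c is available in the build image and the built RPMs have already 
been copied into exported-artifacts/):

  # turn the directory of freshly built RPMs into a yum repository
  dnf install -y createrepo_c
  createrepo_c exported-artifacts/
  # exported-artifacts/ now holds the RPMs plus repodata/ and is what the
  # workflow uploads as rpm-<distro>.zip in step 3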

An example build:
https://github.com/nirs/ovirt-imageio/actions/runs/1531658722

To start OST manually, a developer can copy a link to the right zip file
(e.g. for CentOS Stream 8):
https://github.com/nirs/ovirt-imageio/suites/4535392764/artifacts/121520882

And pass the link to the OST "build with parameters" job.

In this solution, the OST side gets a repo that can be included in the build
without any additional code or logic - just unzip and use the repo from the
directory.
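On the consuming side that can be as little as a file:// repo definition (a 
sketch only; the repo id, file name and path are made up):

  # register the unpacked directory as a local repo
  printf '%s\n' \
      '[ost-extra]' \
      'name=Extra packages from the artifacts zip' \
      'baseurl=file:///var/tmp/ost-extra-repo' \
      'gpgcheck=0' \
      'enabled=1' \
      | sudo tee /etc/yum.repos.d/ost-extra.repo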

I think this is the minimal solution to allow running OST with artifacts
built in GitHub.

For triggering jobs automatically, we will need a way to find the
right artifacts for a build,
or use some convention for naming the artifacts in all projects.

I started with the simple convention of jobname-containername since it
is easy to integrate
with the infrastructure we already have in the project.
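For illustration, looking an artifact up by that convention could be roughly 
(owner/repo, the run id and the token handling are placeholders, not actual 
project code):

  # list the artifacts of a workflow run and pick the one matching the
  # jobname-containername convention, e.g. rpm-centos-stream-8
  artifact_name="rpm-centos-stream-8"
  curl -sSL -H "Authorization: Bearer $GITHUB_TOKEN" \
      "https://api.github.com/repos/<owner>/<repo>/actions/runs/$RUN_ID/artifacts" \
      | jq -r --arg name "$artifact_name" \
          '.artifacts[] | select(.name == $name) | .archive_download_url'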

Nir
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/K46FB3JIV6HALXJKC3MARNHDWHAXPG5K/


[ovirt-devel] Re: Another CI failure

2021-12-02 Thread Ehud Yonasi
Fixed

> On 1 Dec 2021, at 11:07, Milan Zamazal  wrote:
> 
> Ehud Yonasi  writes:
> 
>> It was a problem with the global setup on an el9 node - there was a missing 
>> package and it was removed in:
>> https://gerrit.ovirt.org/c/jenkins/+/117867 
>> 
> 
> Thank you!
> 
> Unfortunately, there is another problem, on an el8 node:
> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/31165/
> 
> Apparently this:
> 
>  Error: Unable to find a match: python-pyyaml python-jinja2 python-six 
> python-pyxdg
> 
>>> On 30 Nov 2021, at 22:14, Michal Skrivanek  
>>> wrote:
>>> 
>>> 
>>> 
 On 30. 11. 2021, at 14:37, Milan Zamazal  wrote:
 
 Hi,
 
 as demonstrated in
 https://jenkins.ovirt.org/job/vdsm_standard-check-patch/31133/, OST
 builds can at least start now but still fail, apparently due to the
 following:
 
 + sudo -n usermod -a -G jenkins qemu
 usermod: user 'qemu' does not exist
 + log ERROR 'Failed to add user qemu to group jenkins'
 + local level=ERROR
 + shift
 + local 'message=Failed to add user qemu to group jenkins'
 + local prefix
 + [[ 4 -gt 1 ]]
 + prefix='global_setup[lago_setup]'
 + echo 'global_setup[lago_setup] ERROR: Failed to add user qemu to group 
 jenkins'
 global_setup[lago_setup] ERROR: Failed to add user qemu to group jenkins
 + return 1
 + failed=true
 
 What can be done about it?
>>> 
>>> it passed the next time, perhaps a faulty jenkins node that doesn't build?
>>> 
 
 Thanks,
 Milan
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/CHAYIBTSTS6NMSSM6NGJKXCKOU7A47A6/


[ovirt-devel] Re: Updates on oVirt Node and oVirt appliance building outside Jenkins

2021-12-02 Thread Michal Skrivanek


> On 2. 12. 2021, at 13:19, Sandro Bonazzola  wrote:
> 
> Hi, just a quick update on current issues in trying to build oVirt Node and 
> the engine appliance outside Jenkins.
> 
> 1) Using GitHub Actions: an attempt to build it is in progress here: 
> https://github.com/sandrobonazzola/ovirt-appliance/pull/1 
> 
> it's currently failing due to lorax not being able to perform the build. It 
> kind of makes sense as we are trying to do a virt-install within a container 
> without the needed virtualization hardware exposed.
> I'm currently investigating how to make use of software virtualization in 
> order to drop the requirement on missing hardware / nested virtualization.
> Also investigating how to use self-hosted runners to provide a build system 
> with usable virtualization hardware.

It can usually be worked around by bypassing libvirt and/or using full 
emulation.
Can we somehow get to the virt-install log?
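For example (just a sketch, assuming the build can live with the speed hit of 
full emulation and that the virt-install invocation can be adjusted):

  # fall back to plain software emulation when /dev/kvm is not available
  if [ -e /dev/kvm ]; then
      virt_type=kvm
  else
      virt_type=qemu    # TCG emulation: much slower, but needs no hardware
  fi
  # then pass it through, e.g. virt-install --virt-type "$virt_type" plus the
  # existing options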

> 
> 2) Using COPR: we have basically the same issue: lorax fails because it has 
> no access to the virtualization hardware.
> 
> 3) Using CentOS Community Build System
> This is a fully-fledged Koji instance and it allows building with the 
> image-build variant. It has a completely different configuration system and 
> it is more similar to what we are doing within the downstream build of oVirt 
> Node. An attempt at providing the configuration started here: 
> https://gerrit.ovirt.org/c/ovirt-appliance/+/117801 
> 
> The issue there is that all the packages that need to be included within Node 
> and Appliance must be built within the CentOS Community Build System build 
> root. The system has no external access to the internet, so everything we 
> need has to come from CentOS infra.
> 
> I haven't started digging into oVirt Node but the build flow is very similar 
> to the appliance one, so once one is solved, the other should be simple.
> 
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
> sbona...@redhat.com    
>  
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.
> 
> 

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/P7SNY6SSQYNNLFVV5NQIRPURDOEIIQAD/


[ovirt-devel] Re: oVirt Community - open discussion around repositories and workflows

2021-12-02 Thread Michal Skrivanek


> On 2. 12. 2021, at 12:51, Milan Zamazal  wrote:
> 
> Michal Skrivanek  writes:
> 
>>> On 1. 12. 2021, at 16:57, Nir Soffer  wrote:
>>> 
>>> On Wed, Dec 1, 2021 at 11:38 AM Milan Zamazal  wrote:
 
 Michal Skrivanek  writes:
 
> Hi all,
> so far we haven't encountered any blocking issue with this effort, I
> wanted to propose to decide on oVirt development moving to GitHub,
> COPR and CBS. Recent issue with decommissioning of our CI datacenter
> is a good reminder why we are doing that...
> What do we want to do?
> 1) move "ovirt-master-snapshot" compose to COPR
> it is feasible for all projects except ovirt-node and
> appliance due to COPR limitations, for these two we plan to use a
> self-hosted runner in github env.
> it replaces the "build-artifacts" stdci stage
> 2) move release to CentOS Community Build System to simplify our oVirt 
> releases
> replaces our custom releng-tools process and aligns us better
> with CentOS that is our main (and only) platform we support.
> 3) move development from Gerrit to GitHub
> this is a very visible change and affects every oVirt
> developer. We need a way to test posted patches and the current
> stdci "check-patch" stage is overly complex and slow to run, we lack
> people for stdci maintenance in general (bluntly, it's a dead
> project). Out of the various options that exist we ended up converging
> to Github. Why? Just because it's the most simple thing to do for us,
> with least amount of effort, least amount of additional people and hw
> resources, with a manageable learning curve. It comes at a price - it
> only works if we switch our primary development from Gerrit to Github
> for all the remaining projects. It is a big change to our processes,
> but I believe we have to go through that transition in order to solve
> our CI troubles for good.  We started preparing a guide and templates to
> use so that we keep a uniform "look and feel" for all sub-projects; it
> shall be ready soon.
> 
> I'd like us to move from "POC" stage to "production", and actively
> start working on the above, start moving project after project.
> Let me ask for a final round of thoughts, comments, objections, we are 
> ready to go ahead.
 
 Hi,
 
 the Vdsm maintainers have discussed the possibility of moving Vdsm
 development to GitHub and we consider it a reasonable and feasible
 option.  Although GitHub is not on par with Gerrit as for code reviews,
 having a more reliable common development platform outweighs the
 disadvantages.  There is already an ongoing work on having a fully
 usable Vdsm CI on GitHub.
 
 One thing related to the move is that we would like to retain the
 history of code reviews from Gerrit.  The comments there contain
 valuable information that we wouldn't like to lose.  Is there a way to
 export the public Gerrit contents, once we make a switch to GitHub for
 each particular project, to something that could be reasonably used for
 patch archaeology when needed?
>>> 
>>> I think keeping a readonly instance would be best, once all projects
>>> migrated to github.
>> 
>> 
>>> 
>>> I hope there is a way to export the data to static html so it will be
>>> available forever without running an actual gerrit instance.
>> 
>> no idea...it can definitely be scraped patch after patch..but it's
>> going to be really huge and again, it will keep running, there's no
>> plan to shut it down or anything.
>> gerrit.ovirt.org will stay up 
> 
> And working / properly maintained?  Can you guarantee it will remain
> usable for the relevant purposes?  If yes then it would be indeed the
> best option.

Why not? It has been in the care of the infra@ovirt team for more than a 
decade now, why would it change?
The reason for our move to GitHub is primarily the stdci issues. It has been 
hurting our effectiveness for a long time now and it will be made obsolete by 
something that doesn't require that much babysitting. So we'll drop it.
But that is no reason to drop useful things. 
For as long as it is helpful it will stay, of course. Beyond that - probably 
not. The same happened to the project's history before - the whole history 
from before open sourcing is entirely gone, and reviews and review comments 
from the time when we used gerrit.usersys.redhat.com, before we fully moved to 
gerrit.ovirt.org in 2011/2012, are gone too. At some point it became useless 
to keep it alive.


> 
>> for as long as it's needed and relevant. If it ever comes to shutting
>> it down I don't think there's going to be anyone caring about the
>> comments
>> 
>>> 
>>> Nir
>>> 
 
> It's not going to be easy, but I firmly believe it will greatly
> improve maintainability of oVirt and reduce overhead that we all
> struggle with for years.
> 
> Thanks,
> michal
> 

[ovirt-devel] Updates on oVirt Node and oVirt appliance building outside Jenkins

2021-12-02 Thread Sandro Bonazzola
Hi, just a quick update on current issues in trying to build oVirt Node and
the engine appliance outside Jenkins.

1) Using GitHub Actions: an attempt to build it is in progress here:
https://github.com/sandrobonazzola/ovirt-appliance/pull/1
it's currently failing due to lorax not being able to perform the build. It
kind of makes sense as we are trying to do a virt-install within a container
without the needed virtualization hardware exposed.
I'm currently investigating how to make use of software virtualization in
order to drop the requirement on missing hardware / nested virtualization.
Also investigating how to use self-hosted runners to provide a build system
with usable virtualization hardware.

2) Using COPR: we have basically the same issue: lorax fails because it has
no access to the virtualization hardware.

3) Using CentOS Community Build System
This is a fully-fledged Koji instance and it allows building with the
image-build variant. It has a completely different configuration system and
it is more similar to what we are doing within the downstream build of
oVirt Node. An attempt at providing the configuration started here:
https://gerrit.ovirt.org/c/ovirt-appliance/+/117801
The issue there is that all the packages that need to be included within Node
and Appliance must be built within the CentOS Community Build System build
root. The system has no external access to the internet, so everything we
need has to come from CentOS infra.

I haven't started digging into oVirt Node but the build flow is very
similar to the appliance one, so once one is solved, the other should be
simple.

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KVIBTHT2SQ5D7QAMDO2TBXBHELHWWCB7/


[ovirt-devel] Re: oVirt Community - open discussion around repositories and workflows

2021-12-02 Thread Milan Zamazal
Michal Skrivanek  writes:

>> On 1. 12. 2021, at 16:57, Nir Soffer  wrote:
>> 
>> On Wed, Dec 1, 2021 at 11:38 AM Milan Zamazal  wrote:
>>> 
>>> Michal Skrivanek  writes:
>>> 
 Hi all,
 so far we haven't encountered any blocking issue with this effort, I
 wanted to propose to decide on oVirt development moving to GitHub,
 COPR and CBS. Recent issue with decommissioning of our CI datacenter
 is a good reminder why we are doing that...
 What do we want to do?
 1) move "ovirt-master-snapshot" compose to COPR
  it is feasible for all projects except ovirt-node and
 appliance due to COPR limitations, for these two we plan to use a
 self-hosted runner in github env.
  it replaces the "build-artifacts" stdci stage
 2) move release to CentOS Community Build System to simplify our oVirt 
 releases
  replaces our custom releng-tools process and aligns us better
 with CentOS that is our main (and only) platform we support.
 3) move development from Gerrit to GitHub
  this is a very visible change and affects every oVirt
 developer. We need a way to test posted patches and the current
 stdci "check-patch" stage is overly complex and slow to run, we lack
 people for stdci maintenance in general (bluntly, it's a dead
 project). Out of the various options that exist we ended up converging
 to Github. Why? Just because it's the most simple thing to do for us,
 with least amount of effort, least amount of additional people and hw
 resources, with a manageable learning curve. It comes at a price - it
 only works if we switch our primary development from Gerrit to Github
 for all the remaining projects. It is a big change to our processes,
 but I believe we have to go through that transition in order to solve
 our CI troubles for good.  We started preparing a guide and templates to
 use so that we keep a uniform "look and feel" for all sub-projects; it
 shall be ready soon.
 
 I'd like us to move from "POC" stage to "production", and actively
 start working on the above, start moving project after project.
 Let me ask for a final round of thoughts, comments, objections, we are 
 ready to go ahead.
>>> 
>>> Hi,
>>> 
>>> the Vdsm maintainers have discussed the possibility of moving Vdsm
>>> development to GitHub and we consider it a reasonable and feasible
>>> option.  Although GitHub is not on par with Gerrit as for code reviews,
>>> having a more reliable common development platform outweighs the
>>> disadvantages.  There is already an ongoing work on having a fully
>>> usable Vdsm CI on GitHub.
>>> 
>>> One thing related to the move is that we would like to retain the
>>> history of code reviews from Gerrit.  The comments there contain
>>> valuable information that we wouldn't like to lose.  Is there a way to
>>> export the public Gerrit contents, once we make a switch to GitHub for
>>> each particular project, to something that could be reasonably used for
>>> patch archaeology when needed?
>> 
>> I think keeping a readonly instance would be best, once all projects
>> migrated to github.
>
>
>> 
>> I hope there is a way to export the data to static html so it will be
>> available forever without running an actual gerrit instance.
>
> no idea...it can definitely be scraped patch after patch..but it's
> going to be really huge and again, it will keep running, there's no
> plan to shut it down or anything.
> gerrit.ovirt.org will stay up 

And working / properly maintained?  Can you guarantee it will remain
usable for the relevant purposes?  If yes then it would be indeed the
best option.

> for as long as it's needed and relevant. If it ever comes to shutting
> it down I don't think there's going to be anyone caring about the
> comments
>
>> 
>> Nir
>> 
>>> 
 It's not going to be easy, but I firmly believe it will greatly
 improve maintainability of oVirt and reduce overhead that we all
 struggle with for years.
 
 Thanks,
 michal
 
> On 10. 11. 2021, at 9:17, Sandro Bonazzola  wrote:
> 
> Hi, here's an update on what has been done so far and how it is going.
> 
> COPR
> All the oVirt active subprojects are now built on COPR except oVirt
> Engine Appliance and oVirt Node: I'm still looking into how to build
> them on COPR.
> 
> Of those subprojects only the following are not yet built
> automatically on patch merge event as they have pending patches for
> enabling the automation:
> - ovirt-engine-nodejs-modules:
> https://gerrit.ovirt.org/c/ovirt-engine-nodejs-modules/+/117506
> -
> ovirt-engine-ui-extensions:
> https://gerrit.ovirt.org/c/ovirt-engine-ui-extensions/+/117512
> 
> - ovirt-web-ui: https://github.com/oVirt/ovirt-web-ui/pull/1532

[ovirt-devel] Re: oVirt Community - open discussion around repositories and workflows

2021-12-02 Thread Michal Skrivanek


> On 1. 12. 2021, at 16:57, Nir Soffer  wrote:
> 
> On Wed, Dec 1, 2021 at 11:38 AM Milan Zamazal  wrote:
>> 
>> Michal Skrivanek  writes:
>> 
>>> Hi all,
>>> so far we haven't encountered any blocking issue with this effort, I
>>> wanted to propose to decide on oVirt development moving to GitHub,
>>> COPR and CBS. Recent issue with decommissioning of our CI datacenter
>>> is a good reminder why we are doing that...
>>> What do we want to do?
>>> 1) move "ovirt-master-snapshot" compose to COPR
>>>  it is feasible for all projects except ovirt-node and
>>> appliance due to COPR limitations, for these two we plan to use a
>>> self-hosted runner in github env.
>>>  it replaces the "build-artifacts" stdci stage
>>> 2) move release to CentOS Community Build System to simplify our oVirt 
>>> releases
>>>  replaces our custom releng-tools process and aligns us better
>>> with CentOS that is our main (and only) platform we support.
>>> 3) move development from Gerrit to GitHub
>>>  this is a very visible change and affects every oVirt
>>> developer. We need a way to test posted patches and the current
>>> stdci "check-patch" stage is overly complex and slow to run, we lack
>>> people for stdci maintenance in general (bluntly, it's a dead
>>> project). Out of the various options that exist we ended up converging
>>> to Github. Why? Just because it's the most simple thing to do for us,
>>> with least amount of effort, least amount of additional people and hw
>>> resources, with a manageable learning curve. It comes at a price - it
>>> only works if we switch our primary development from Gerrit to Github
>>> for all the remaining projects. It is a big change to our processes,
>>> but I believe we have to go through that transition in order to solve
>>> our CI troubles for good.  We started preparing a guide and templates to
>>> use so that we keep a uniform "look and feel" for all sub-projects; it
>>> shall be ready soon.
>>> 
>>> I'd like us to move from "POC" stage to "production", and actively
>>> start working on the above, start moving project after project.
>>> Let me ask for a final round of thoughts, comments, objections, we are 
>>> ready to go ahead.
>> 
>> Hi,
>> 
>> the Vdsm maintainers have discussed the possibility of moving Vdsm
>> development to GitHub and we consider it a reasonable and feasible
>> option.  Although GitHub is not on par with Gerrit as for code reviews,
>> having a more reliable common development platform outweighs the
>> disadvantages.  There is already an ongoing work on having a fully
>> usable Vdsm CI on GitHub.
>> 
>> One thing related to the move is that we would like to retain the
>> history of code reviews from Gerrit.  The comments there contain
>> valuable information that we wouldn't like to lose.  Is there a way to
>> export the public Gerrit contents, once we make a switch to GitHub for
>> each particular project, to something that could be reasonably used for
>> patch archaeology when needed?
> 
> I think keeping a readonly instance would be best, once all projects
> migrated to github.


> 
> I hope there is a way to export the data to static html so it will be
> available forever without running an actual gerrit instance.

No idea... it can definitely be scraped patch after patch, but it's going to 
be really huge and again, it will keep running; there's no plan to shut it 
down or anything.
gerrit.ovirt.org will stay up for as long as it's needed and relevant. If it 
ever comes to shutting it down, I don't think there's going to be anyone 
caring about the comments.

> 
> Nir
> 
>> 
>>> It's not going to be easy, but I firmly believe it will greatly
>>> improve maintainability of oVirt and reduce overhead that we all
>>> struggle with for years.
>>> 
>>> Thanks,
>>> michal
>>> 
 On 10. 11. 2021, at 9:17, Sandro Bonazzola  wrote:
 
 Hi, here's an update on what has been done so far and how it is going.
 
 COPR
 All the oVirt active subprojects are now built on COPR except oVirt
 Engine Appliance and oVirt Node: I'm still looking into how to build
 them on COPR.
 
 Of those subprojects only the following are not yet built
 automatically on patch merge event as they have pending patches for
 enabling the automation:
 - ovirt-engine-nodejs-modules:
 https://gerrit.ovirt.org/c/ovirt-engine-nodejs-modules/+/117506
 -
 ovirt-engine-ui-extensions:
 https://gerrit.ovirt.org/c/ovirt-engine-ui-extensions/+/117512
 
 - ovirt-web-ui: https://github.com/oVirt/ovirt-web-ui/pull/1532
 
 
 You can see the build status for the whole project here:
 https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/monitor/
 

[ovirt-devel] Re: COPR's ovirt-master-snapshot replacing resources.ovirt.org's "tested" repo

2021-12-02 Thread Michal Skrivanek


> On 2. 12. 2021, at 12:07, Michal Skrivanek  
> wrote:
> 
> Hi all,
> COPR[1] is effectively replacing the "tested" repo - the one that gets all 
> built artifacts after a patch is merged (and is actually not tested:)
> 
> Sandro started to move projects to COPR for merged patches some time ago and 
> we believe it's complete except for appliance and node.
> So we can all start switching to it, as it is currently more up to date 
> anyway. Please start modifying your CI setup, other automation, your own 
> updates, wherever really, and just switch from [2] to [3] (adjusting the 
> platform/arch accordingly as well, so e.g. [4]).
> 
> Thanks,
> michal
> 
> [1] https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/
> [2] https://resources.ovirt.org/repos/ovirt/tested/master/rpm/
> [3] 
> https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/
> [4] 
> https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/centos-stream-9-x86_64

Also, for most cases I guess it makes most sense to use the DNF vars:
https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/centos-stream-$releasever-$basearch/
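For example (a sketch; the repo id and file name are made up, and gpgcheck is 
left off only to keep it short):

  # write a repo definition using the DNF variables; the single quotes keep
  # $releasever/$basearch literal so dnf expands them at runtime
  printf '%s\n' \
      '[ovirt-master-snapshot]' \
      'name=oVirt master snapshot (COPR)' \
      'baseurl=https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/centos-stream-$releasever-$basearch/' \
      'gpgcheck=0' \
      'enabled=1' \
      | sudo tee /etc/yum.repos.d/ovirt-master-snapshot.repo

On hosts with dnf-plugins-core, "dnf copr enable ovirt/ovirt-master-snapshot" 
should achieve roughly the same.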
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VPFGRVRMXP4I6OCRYIJ7W6LH2YIL5CB5/


[ovirt-devel] COPR's ovirt-master-snapshot replacing resources.ovirt.org's "tested" repo

2021-12-02 Thread Michal Skrivanek
Hi all,
COPR[1] is effectively replacing the "tested" repo - the one that gets all 
built artifacts after a patch is merged (and is actually not tested:)

Sandro started to move projects to COPR for merged patches some time ago and we 
believe it's complete except for appliance and node.
So we can all start switching to it, as it is currently more up to date anyway.
Please start modifying your CI setup, other automation, your own updates, 
wherever really, and just switch from [2] to [3] (adjusting the platform/arch 
accordingly as well, so e.g. [4]).

Thanks,
michal

[1] https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/
[2] https://resources.ovirt.org/repos/ovirt/tested/master/rpm/
[3] 
https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/
[4] 
https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/centos-stream-9-x86_64
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/IRX43SNXNNG3TUZMV6BH427PKVTNIJRI/