Bumping the thread, upstream patches are merged now [0]. With current
upstream code, I can generate an image from master packages with:

$ wget https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1801-01.qcow2
$ virt-customize -a CentOS-7-x86_64-GenericCloud-1801-01.qcow2 --selinux-relabel --run-command 'yum-config-manager --add-repo http://trunk.rdoproject.org/centos7/delorean-deps.repo'
$ virt-customize -a CentOS-7-x86_64-GenericCloud-1801-01.qcow2 --selinux-relabel --run-command 'yum-config-manager --add-repo https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo'
$ DIB_LOCAL_IMAGE=/home/stack/CentOS-7-x86_64-GenericCloud-1801-01.qcow2 /opt/stack/octavia/diskimage-create/diskimage-create.sh -p -i centos -o amphora-x64-haproxy-centos.qcow2
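The steps above could be wrapped in a small script; a dry-run sketch (it only echoes each command instead of running it, and the /opt/stack path assumes a devstack checkout as in this thread):

```shell
#!/bin/sh
# Dry-run sketch of the image build steps from this thread.
# "run" echoes the commands so the sequence is easy to review;
# drop the echo to actually execute them.
set -eu

IMAGE=CentOS-7-x86_64-GenericCloud-1801-01.qcow2
DEPS_REPO=http://trunk.rdoproject.org/centos7/delorean-deps.repo
TRUNK_REPO=https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo

run() { echo "$@"; }

# fetch the base cloud image
run wget "https://cloud.centos.org/centos/7/images/$IMAGE"
# enable the RDO trunk dependency and current-passed-ci repos inside the image
run virt-customize -a "$IMAGE" --selinux-relabel \
    --run-command "yum-config-manager --add-repo $DEPS_REPO"
run virt-customize -a "$IMAGE" --selinux-relabel \
    --run-command "yum-config-manager --add-repo $TRUNK_REPO"
# build the amphora image on top of the customized base image
run env DIB_LOCAL_IMAGE="$PWD/$IMAGE" \
    /opt/stack/octavia/diskimage-create/diskimage-create.sh \
    -p -i centos -o amphora-x64-haproxy-centos.qcow2
```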
This is with devstack, but it will be mostly the same when the RDO packages
are updated (just the script location, which then comes from the
openstack-octavia-diskimage-create package).

So what are the next steps here? Missing information, a place to track
this, an item for the next meeting, action items, … ?

[0] https://review.openstack.org/#/c/522626/

On 12 January 2018 at 13:05, Bernard Cafarelli <bcafa...@redhat.com> wrote:
> On 11 January 2018 at 11:53, Javier Pena <jp...@redhat.com> wrote:
>> ----- Original Message -----
>>> On Wed, Jan 10, 2018 at 7:50 PM, Javier Pena <jp...@redhat.com> wrote:
>>> > If we want to deliver via RPM and build on each Octavia change, we could
>>> > try to add it to the octavia spec and build it using DLRN. Does the script
>>> > require many external resources besides diskimage-builder?
>>> > I'm not sure if that would work on CBS though, if we need to have network
>>> > connectivity during the build process.
>
> I looked a bit initially into building the image directly in the spec; one
> problem was how to properly pass the needed RDO packages to
> diskimage-builder (as a repo, so that yum pulls them in).
> Apart from some configuration tweaks, most of the steps sum up to yum
> calls (system update; install haproxy, keepalived, …; install
> openstack-octavia-amphora-agent). These need network access, or at
> least local mirrors.
>
>>> I would be concerned about the storage required; also, we would need to
>>> trigger not only on Octavia distgit or upstream changes, since all
>>> included RPMs need to be checked for updates.
>>> This could be simulated with dummy commits in distgit to force e.g. a
>>> nightly refresh, but due to the storage requirements I'd keep image
>>> builds outside the trunk repos.
>>>
>>
>> I have been doing some tests, and it looks like running diskimage-builder
>> from a chroot is not the best idea (it tries to mount some tmpfs and fails),
>> so even if we solved the storage issue it wouldn't work.
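On the "pass the needed RDO packages to diskimage-builder as a repo" point above: diskimage-builder's yum-based elements can copy extra repo files into the build chroot via the DIB_YUM_REPO_CONF variable. A sketch, where the local mirror baseurl is an assumption for illustration:

```shell
#!/bin/sh
# Sketch: point diskimage-builder at a local mirror of the RDO trunk repo,
# so that yum calls inside the build chroot resolve against it instead of
# needing external network access. The baseurl is a hypothetical local
# mirror path, not an official location.
set -eu

cat > /tmp/delorean-local.repo <<'EOF'
[delorean-local]
name=Local RDO trunk mirror
baseurl=file:///var/mirror/rdo/centos7/current-passed-ci
enabled=1
gpgcheck=0
EOF

# diskimage-builder copies the listed repo files into the image build
export DIB_YUM_REPO_CONF=/tmp/delorean-local.repo
# diskimage-create.sh would then be invoked as usual and pick this up
```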
>> I think our best chance is to create a periodic job to rebuild the images
>> (daily) and then upload them to images.rdoproject.org. This would be a
>> similar approach to what we are currently doing with containers.
>
> That would work for "keeping other packages up to date" too.
>
>> The only drawback of this alternative is that we would be distributing the
>> qcow2 images instead of an RPM package, but we could still apply retention
>> policies, and add some CI jobs to test them if needed.
>
> On disk usage and retention policies, the images I build locally (with
> CentOS) are around 500 MB qcow2 files.
>
> --
> Bernard

-- 
Bernard Cafarelli
_______________________________________________
dev mailing list
dev@lists.rdoproject.org
http://lists.rdoproject.org/mailman/listinfo/dev

To unsubscribe: dev-unsubscr...@lists.rdoproject.org
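The periodic-rebuild-plus-retention idea discussed above could look roughly like this nightly job; the output directory, retention count, and upload target are assumptions, not an agreed policy:

```shell
#!/bin/sh
# Hypothetical nightly job: build a date-stamped amphora image, upload it,
# and keep only the newest KEEP copies locally. All paths are illustrative.
set -eu

KEEP=7
OUTDIR=${OUTDIR:-/var/tmp/amphora-images}
mkdir -p "$OUTDIR"

# Build step would go here, e.g.:
#   diskimage-create.sh -p -i centos -o "$OUTDIR/amphora-$(date +%F).qcow2"
# Upload step would go here, e.g. (assumed destination path):
#   rsync "$OUTDIR"/amphora-*.qcow2 uploader@images.rdoproject.org:...

# Retention: delete everything but the KEEP newest images
ls -1t "$OUTDIR"/amphora-*.qcow2 2>/dev/null \
    | tail -n +"$((KEEP + 1))" \
    | xargs -r rm -f
```

At ~500 MB per image, a 7-day retention window stays around 3.5 GB per image flavor.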